00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1062
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3724
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.065 The recommended git tool is: git
00:00:00.065 using credential 00000000-0000-0000-0000-000000000002
00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.092 Fetching changes from the remote Git repository
00:00:00.095 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.137 Using shallow fetch with depth 1
00:00:00.137 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.137 > git --version # timeout=10
00:00:00.192 > git --version # 'git version 2.39.2'
00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.235 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.235 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.851 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.863 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.874 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.874 > git config core.sparsecheckout # timeout=10
00:00:03.886 > git read-tree -mu HEAD # timeout=10
00:00:03.902 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.923 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.923 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.024 [Pipeline] Start of Pipeline
00:00:04.038 [Pipeline] library
00:00:04.039 Loading library shm_lib@master
00:00:04.039 Library shm_lib@master is cached. Copying from home.
00:00:04.059 [Pipeline] node
00:00:04.071 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.073 [Pipeline] {
00:00:04.084 [Pipeline] catchError
00:00:04.085 [Pipeline] {
00:00:04.096 [Pipeline] wrap
00:00:04.103 [Pipeline] {
00:00:04.110 [Pipeline] stage
00:00:04.112 [Pipeline] { (Prologue)
00:00:04.302 [Pipeline] sh
00:00:04.586 + logger -p user.info -t JENKINS-CI
00:00:04.603 [Pipeline] echo
00:00:04.605 Node: WFP4
00:00:04.613 [Pipeline] sh
00:00:04.920 [Pipeline] setCustomBuildProperty
00:00:04.931 [Pipeline] echo
00:00:04.933 Cleanup processes
00:00:04.938 [Pipeline] sh
00:00:05.222 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.222 675146 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.235 [Pipeline] sh
00:00:05.522 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.523 ++ grep -v 'sudo pgrep'
00:00:05.523 ++ awk '{print $1}'
00:00:05.523 + sudo kill -9
00:00:05.523 + true
00:00:05.534 [Pipeline] cleanWs
00:00:05.542 [WS-CLEANUP] Deleting project workspace...
00:00:05.542 [WS-CLEANUP] Deferred wipeout is used...
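The cleanup step above builds its kill list with a three-stage pipeline: `pgrep -af <workspace>` lists matching processes as "PID CMDLINE" pairs, `grep -v 'sudo pgrep'` drops the pgrep invocation itself, and `awk '{print $1}'` keeps only the PID column. In this run nothing else matched, so `sudo kill -9` ran with no arguments and the trailing `+ true` absorbed its non-zero exit. A minimal sketch of that PID-extraction logic, using hypothetical canned `pgrep -af` output in place of a live process table:

```shell
#!/bin/sh
# Stand-in for `sudo pgrep -af $workspace`: one "PID CMDLINE" pair per line.
# The second PID and its command line are hypothetical, for illustration only.
sample_pgrep_output() {
    printf '675146 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk\n'
    printf '675200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt\n'
}

# Drop the pgrep process itself, then keep only the first (PID) column.
pids=$(sample_pgrep_output | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"    # prints: 675200

# The real step then runs `sudo kill -9 $pids`; when $pids is empty, kill
# fails, and the `+ true` seen in the log keeps the stage from aborting.
```

When only the pgrep invocation matches (as in this build), the pipeline yields an empty list, which is why the fallback `true` is there.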
00:00:05.548 [WS-CLEANUP] done
00:00:05.551 [Pipeline] setCustomBuildProperty
00:00:05.561 [Pipeline] sh
00:00:05.842 + sudo git config --global --replace-all safe.directory '*'
00:00:05.913 [Pipeline] httpRequest
00:00:06.253 [Pipeline] echo
00:00:06.254 Sorcerer 10.211.164.20 is alive
00:00:06.263 [Pipeline] retry
00:00:06.265 [Pipeline] {
00:00:06.277 [Pipeline] httpRequest
00:00:06.281 HttpMethod: GET
00:00:06.282 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.282 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.294 Response Code: HTTP/1.1 200 OK
00:00:06.294 Success: Status code 200 is in the accepted range: 200,404
00:00:06.294 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.144 [Pipeline] }
00:00:07.164 [Pipeline] // retry
00:00:07.171 [Pipeline] sh
00:00:07.455 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.469 [Pipeline] httpRequest
00:00:07.868 [Pipeline] echo
00:00:07.870 Sorcerer 10.211.164.20 is alive
00:00:07.878 [Pipeline] retry
00:00:07.879 [Pipeline] {
00:00:07.892 [Pipeline] httpRequest
00:00:07.897 HttpMethod: GET
00:00:07.897 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:07.898 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:07.918 Response Code: HTTP/1.1 200 OK
00:00:07.919 Success: Status code 200 is in the accepted range: 200,404
00:00:07.919 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:38.579 [Pipeline] }
00:00:38.597 [Pipeline] // retry
00:00:38.605 [Pipeline] sh
00:00:38.895 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:41.445 [Pipeline] sh
00:00:41.732 + git -C spdk log --oneline -n5
00:00:41.733 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:41.733 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:41.733 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:41.733 66289a6db build: use VERSION file for storing version
00:00:41.733 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:41.751 [Pipeline] withCredentials
00:00:41.762 > git --version # timeout=10
00:00:41.776 > git --version # 'git version 2.39.2'
00:00:41.794 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:41.797 [Pipeline] {
00:00:41.806 [Pipeline] retry
00:00:41.808 [Pipeline] {
00:00:41.823 [Pipeline] sh
00:00:42.109 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:42.381 [Pipeline] }
00:00:42.399 [Pipeline] // retry
00:00:42.404 [Pipeline] }
00:00:42.421 [Pipeline] // withCredentials
00:00:42.431 [Pipeline] httpRequest
00:00:42.858 [Pipeline] echo
00:00:42.860 Sorcerer 10.211.164.20 is alive
00:00:42.871 [Pipeline] retry
00:00:42.873 [Pipeline] {
00:00:42.888 [Pipeline] httpRequest
00:00:42.893 HttpMethod: GET
00:00:42.893 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:42.894 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:42.901 Response Code: HTTP/1.1 200 OK
00:00:42.902 Success: Status code 200 is in the accepted range: 200,404
00:00:42.902 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:36.917 [Pipeline] }
00:01:36.934 [Pipeline] // retry
00:01:36.941 [Pipeline] sh
00:01:37.225 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:38.615 [Pipeline] sh
00:01:38.901 + git -C dpdk log --oneline -n5
00:01:38.901 eeb0605f11 version: 23.11.0
00:01:38.901 238778122a doc: update release notes for 23.11
00:01:38.901 46aa6b3cfc doc: fix description of RSS features
00:01:38.901 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:38.901 7e421ae345 devtools: support skipping forbid rule check
00:01:38.911 [Pipeline] }
00:01:38.924 [Pipeline] // stage
00:01:38.933 [Pipeline] stage
00:01:38.935 [Pipeline] { (Prepare)
00:01:38.954 [Pipeline] writeFile
00:01:38.970 [Pipeline] sh
00:01:39.255 + logger -p user.info -t JENKINS-CI
00:01:39.268 [Pipeline] sh
00:01:39.553 + logger -p user.info -t JENKINS-CI
00:01:39.565 [Pipeline] sh
00:01:39.854 + cat autorun-spdk.conf
00:01:39.854 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.854 SPDK_TEST_NVMF=1
00:01:39.854 SPDK_TEST_NVME_CLI=1
00:01:39.854 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:39.854 SPDK_TEST_NVMF_NICS=e810
00:01:39.854 SPDK_TEST_VFIOUSER=1
00:01:39.854 SPDK_RUN_UBSAN=1
00:01:39.854 NET_TYPE=phy
00:01:39.854 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:39.854 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:39.861 RUN_NIGHTLY=1
00:01:39.866 [Pipeline] readFile
00:01:39.889 [Pipeline] withEnv
00:01:39.891 [Pipeline] {
00:01:39.904 [Pipeline] sh
00:01:40.191 + set -ex
00:01:40.191 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:40.191 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:40.191 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.191 ++ SPDK_TEST_NVMF=1
00:01:40.191 ++ SPDK_TEST_NVME_CLI=1
00:01:40.191 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:40.191 ++ SPDK_TEST_NVMF_NICS=e810
00:01:40.191 ++ SPDK_TEST_VFIOUSER=1
00:01:40.191 ++ SPDK_RUN_UBSAN=1
00:01:40.191 ++ NET_TYPE=phy
00:01:40.191 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:40.191 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:40.191 ++ RUN_NIGHTLY=1
00:01:40.191 + case $SPDK_TEST_NVMF_NICS in
00:01:40.191 + DRIVERS=ice
00:01:40.191 + [[ tcp == \r\d\m\a ]]
00:01:40.191 + [[ -n ice ]]
00:01:40.191 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:40.191 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:40.191 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:40.191 rmmod: ERROR: Module i40iw is not currently loaded
00:01:40.191 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:40.191 + true
00:01:40.191 + for D in $DRIVERS
00:01:40.191 + sudo modprobe ice
00:01:40.191 + exit 0
00:01:40.200 [Pipeline] }
00:01:40.214 [Pipeline] // withEnv
00:01:40.219 [Pipeline] }
00:01:40.231 [Pipeline] // stage
00:01:40.240 [Pipeline] catchError
00:01:40.242 [Pipeline] {
00:01:40.255 [Pipeline] timeout
00:01:40.255 Timeout set to expire in 1 hr 0 min
00:01:40.257 [Pipeline] {
00:01:40.270 [Pipeline] stage
00:01:40.272 [Pipeline] { (Tests)
00:01:40.286 [Pipeline] sh
00:01:40.572 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.572 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.572 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.572 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:40.572 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:40.572 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:40.572 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:40.572 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:40.572 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:40.572 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:40.572 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:40.572 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.572 + source /etc/os-release
00:01:40.572 ++ NAME='Fedora Linux'
00:01:40.572 ++ VERSION='39 (Cloud Edition)'
00:01:40.572 ++ ID=fedora
00:01:40.572 ++ VERSION_ID=39
00:01:40.572 ++ VERSION_CODENAME=
00:01:40.572 ++ PLATFORM_ID=platform:f39
00:01:40.572 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:40.572 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:40.572 ++ LOGO=fedora-logo-icon
00:01:40.572 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:40.572 ++ HOME_URL=https://fedoraproject.org/
00:01:40.572 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:40.572 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:40.572 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:40.572 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:40.572 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:40.572 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:40.572 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:40.572 ++ SUPPORT_END=2024-11-12
00:01:40.572 ++ VARIANT='Cloud Edition'
00:01:40.572 ++ VARIANT_ID=cloud
00:01:40.572 + uname -a
00:01:40.573 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:40.573 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:43.110 Hugepages
00:01:43.110 node hugesize free / total
00:01:43.110 node0 1048576kB 0 / 0
00:01:43.110 node0 2048kB 0 / 0
00:01:43.110 node1 1048576kB 0 / 0
00:01:43.110 node1 2048kB 0 / 0
00:01:43.110
00:01:43.110 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:43.110 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:43.110 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:43.110 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:43.110 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:43.110 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:43.110 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:43.110 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:43.110 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:43.110 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:43.110 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:43.110 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:43.110 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:43.110 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:43.110 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:43.110 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:43.110 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:43.110 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:43.110 + rm -f /tmp/spdk-ld-path 00:01:43.110 + source autorun-spdk.conf 00:01:43.110 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.110 ++ SPDK_TEST_NVMF=1 00:01:43.110 ++ SPDK_TEST_NVME_CLI=1 00:01:43.110 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.110 ++ SPDK_TEST_NVMF_NICS=e810 00:01:43.110 ++ SPDK_TEST_VFIOUSER=1 00:01:43.110 ++ SPDK_RUN_UBSAN=1 00:01:43.110 ++ NET_TYPE=phy 00:01:43.110 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:43.110 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.110 ++ RUN_NIGHTLY=1 00:01:43.110 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:43.110 + [[ -n '' ]] 00:01:43.110 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.110 + for M in /var/spdk/build-*-manifest.txt 00:01:43.110 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:43.110 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.110 + for M in /var/spdk/build-*-manifest.txt 00:01:43.110 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:43.111 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.111 + for M in /var/spdk/build-*-manifest.txt 00:01:43.111 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:43.111 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.111 ++ uname 00:01:43.111 + [[ Linux == \L\i\n\u\x ]] 00:01:43.111 + sudo dmesg -T 00:01:43.370 + sudo dmesg --clear 00:01:43.370 + dmesg_pid=676634 00:01:43.370 + [[ Fedora Linux == FreeBSD ]] 00:01:43.370 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.370 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.370 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:43.370 + [[ -x /usr/src/fio-static/fio ]] 00:01:43.370 + export FIO_BIN=/usr/src/fio-static/fio 00:01:43.370 + sudo dmesg -Tw 00:01:43.370 + FIO_BIN=/usr/src/fio-static/fio 00:01:43.370 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:43.370 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:43.370 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:43.370 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.370 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.370 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:43.370 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.370 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.370 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.370 16:15:13 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:43.370 16:15:13 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 
-- $ SPDK_TEST_NVME_CLI=1 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.370 16:15:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:43.370 16:15:13 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:43.370 16:15:13 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.370 16:15:13 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:43.370 16:15:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:43.370 16:15:13 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:43.370 16:15:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:43.370 16:15:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.370 16:15:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.370 16:15:13 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.370 16:15:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.370 16:15:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.370 16:15:13 -- paths/export.sh@5 -- $ export PATH 00:01:43.370 16:15:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.370 16:15:13 -- 
common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:43.370 16:15:13 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:43.370 16:15:13 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734189313.XXXXXX 00:01:43.370 16:15:13 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734189313.PBhjbv 00:01:43.370 16:15:13 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:43.370 16:15:13 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:01:43.370 16:15:13 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.370 16:15:13 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:43.370 16:15:13 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:43.370 16:15:13 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:43.370 16:15:13 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:43.370 16:15:13 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:43.370 16:15:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.370 16:15:13 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:43.370 16:15:13 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:43.370 16:15:13 -- pm/common@17 -- $ local monitor 00:01:43.370 16:15:13 -- pm/common@19 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.370 16:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.370 16:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.370 16:15:13 -- pm/common@21 -- $ date +%s 00:01:43.370 16:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.370 16:15:13 -- pm/common@21 -- $ date +%s 00:01:43.370 16:15:13 -- pm/common@25 -- $ sleep 1 00:01:43.370 16:15:13 -- pm/common@21 -- $ date +%s 00:01:43.370 16:15:13 -- pm/common@21 -- $ date +%s 00:01:43.370 16:15:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734189313 00:01:43.370 16:15:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734189313 00:01:43.371 16:15:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734189313 00:01:43.371 16:15:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734189313 00:01:43.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734189313_collect-vmstat.pm.log 00:01:43.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734189313_collect-cpu-load.pm.log 00:01:43.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734189313_collect-cpu-temp.pm.log 00:01:43.630 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734189313_collect-bmc-pm.bmc.pm.log 00:01:44.568 16:15:14 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:44.568 16:15:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:44.569 16:15:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:44.569 16:15:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.569 16:15:14 -- spdk/autobuild.sh@16 -- $ date -u 00:01:44.569 Sat Dec 14 03:15:14 PM UTC 2024 00:01:44.569 16:15:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:44.569 v25.01-rc1-2-ge01cb43b8 00:01:44.569 16:15:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:44.569 16:15:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:44.569 16:15:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:44.569 16:15:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:44.569 16:15:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:44.569 16:15:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.569 ************************************ 00:01:44.569 START TEST ubsan 00:01:44.569 ************************************ 00:01:44.569 16:15:14 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:44.569 using ubsan 00:01:44.569 00:01:44.569 real 0m0.000s 00:01:44.569 user 0m0.000s 00:01:44.569 sys 0m0.000s 00:01:44.569 16:15:14 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:44.569 16:15:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:44.569 ************************************ 00:01:44.569 END TEST ubsan 00:01:44.569 ************************************ 00:01:44.569 16:15:14 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:44.569 16:15:14 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:44.569 16:15:14 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:44.569 16:15:14 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:44.569 16:15:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:44.569 16:15:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.569 ************************************ 00:01:44.569 START TEST build_native_dpdk 00:01:44.569 ************************************ 00:01:44.569 16:15:14 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.569 16:15:14 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:44.569 eeb0605f11 version: 23.11.0 00:01:44.569 238778122a doc: update release notes for 23.11 00:01:44.569 46aa6b3cfc doc: fix description of RSS features 00:01:44.569 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:44.569 7e421ae345 devtools: support skipping forbid rule check 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" 
"mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:44.569 16:15:14 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:44.569 patching file config/rte_config.h 00:01:44.569 Hunk #1 succeeded at 60 (offset 1 line). 
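The xtrace above walks through `cmp_versions` from `scripts/common.sh`: the dotted versions are split on `IFS=.-:` into arrays, then compared component by component until one side wins. A minimal standalone sketch of that comparison logic (function and variable names here are illustrative, not SPDK's actual helpers) looks like:

```shell
# Minimal sketch of a dotted-version less-than check, modeled on the
# cmp_versions trace above. Assumes plain numeric X.Y.Z versions.
ver_lt() {
  local IFS=.
  # Unquoted expansion splits "23.11.0" into (23 11 0) under IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < 3; i++)); do
    # First differing component decides; missing components count as 0.
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # all components equal -> not less-than
}

ver_lt 23.11.0 21.11.0 && echo lt || echo not-lt   # prints "not-lt"
ver_lt 23.11.0 24.07.0 && echo lt || echo not-lt   # prints "lt"
```

This mirrors the two decisions visible in the trace: `lt 23.11.0 21.11.0` returns false (so the pre-21.11 compatibility path is skipped), while `lt 23.11.0 24.07.0` returns true (so the `rte_pcapng.c` patch is applied).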
00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:01:44.569 patching file lib/pcapng/rte_pcapng.c 00:01:44.569 16:15:14 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:44.569 16:15:14 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:44.570 16:15:14 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:44.570 16:15:14 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:44.570 16:15:14 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:01:44.570 16:15:14 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:01:44.570 16:15:14 build_native_dpdk -- common/autobuild_common.sh@191 -- 
$ '[' Linux = FreeBSD ']' 00:01:44.570 16:15:14 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:01:44.570 16:15:14 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:48.764 The Meson build system 00:01:48.764 Version: 1.5.0 00:01:48.764 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.764 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:48.764 Build type: native build 00:01:48.764 Program cat found: YES (/usr/bin/cat) 00:01:48.764 Project name: DPDK 00:01:48.764 Project version: 23.11.0 00:01:48.764 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:48.764 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:48.764 Host machine cpu family: x86_64 00:01:48.764 Host machine cpu: x86_64 00:01:48.764 Message: ## Building in Developer Mode ## 00:01:48.764 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.764 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:48.764 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.764 Program python3 found: YES (/usr/bin/python3) 00:01:48.764 Program cat found: YES (/usr/bin/cat) 00:01:48.764 config/meson.build:113: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:01:48.764 Compiler for C supports arguments -march=native: YES 00:01:48.764 Checking for size of "void *" : 8 00:01:48.764 Checking for size of "void *" : 8 (cached) 00:01:48.764 Library m found: YES 00:01:48.764 Library numa found: YES 00:01:48.764 Has header "numaif.h" : YES 00:01:48.764 Library fdt found: NO 00:01:48.764 Library execinfo found: NO 00:01:48.764 Has header "execinfo.h" : YES 00:01:48.764 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:48.764 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.764 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.764 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.764 Run-time dependency openssl found: YES 3.1.1 00:01:48.764 Run-time dependency libpcap found: YES 1.10.4 00:01:48.764 Has header "pcap.h" with dependency libpcap: YES 00:01:48.764 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.764 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.764 Compiler for C supports arguments -Wformat: YES 00:01:48.764 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.764 Compiler for C supports arguments -Wformat-security: NO 00:01:48.764 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.765 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.765 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.765 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.765 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.765 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.765 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.765 Compiler for C supports arguments -Wundef: YES 00:01:48.765 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.765 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.765 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:48.765 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.765 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.765 Program objdump found: YES (/usr/bin/objdump) 00:01:48.765 Compiler for C supports arguments -mavx512f: YES 00:01:48.765 Checking if "AVX512 checking" compiles: YES 00:01:48.765 Fetching value of define "__SSE4_2__" : 1 00:01:48.765 Fetching value of define "__AES__" : 1 00:01:48.765 Fetching value of define "__AVX__" : 1 00:01:48.765 Fetching value of define "__AVX2__" : 1 00:01:48.765 Fetching value of define "__AVX512BW__" : 1 00:01:48.765 Fetching value of define "__AVX512CD__" : 1 00:01:48.765 Fetching value of define "__AVX512DQ__" : 1 00:01:48.765 Fetching value of define "__AVX512F__" : 1 00:01:48.765 Fetching value of define "__AVX512VL__" : 1 00:01:48.765 Fetching value of define "__PCLMUL__" : 1 00:01:48.765 Fetching value of define "__RDRND__" : 1 00:01:48.765 Fetching value of define "__RDSEED__" : 1 00:01:48.765 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.765 Fetching value of define "__znver1__" : (undefined) 00:01:48.765 Fetching value of define "__znver2__" : (undefined) 00:01:48.765 Fetching value of define "__znver3__" : (undefined) 00:01:48.765 Fetching value of define "__znver4__" : (undefined) 00:01:48.765 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.765 Message: lib/log: Defining dependency "log" 00:01:48.765 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.765 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.765 Checking for function "getentropy" : NO 00:01:48.765 Message: lib/eal: Defining dependency "eal" 00:01:48.765 Message: lib/ring: Defining dependency "ring" 00:01:48.765 Message: lib/rcu: Defining dependency "rcu" 00:01:48.765 Message: lib/mempool: Defining dependency "mempool" 00:01:48.765 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.765 Fetching value 
of define "__PCLMUL__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.765 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:48.765 Compiler for C supports arguments -mpclmul: YES 00:01:48.765 Compiler for C supports arguments -maes: YES 00:01:48.765 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.765 Compiler for C supports arguments -mavx512bw: YES 00:01:48.765 Compiler for C supports arguments -mavx512dq: YES 00:01:48.765 Compiler for C supports arguments -mavx512vl: YES 00:01:48.765 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.765 Compiler for C supports arguments -mavx2: YES 00:01:48.765 Compiler for C supports arguments -mavx: YES 00:01:48.765 Message: lib/net: Defining dependency "net" 00:01:48.765 Message: lib/meter: Defining dependency "meter" 00:01:48.765 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.765 Message: lib/pci: Defining dependency "pci" 00:01:48.765 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.765 Message: lib/metrics: Defining dependency "metrics" 00:01:48.765 Message: lib/hash: Defining dependency "hash" 00:01:48.765 Message: lib/timer: Defining dependency "timer" 00:01:48.765 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.765 Message: lib/acl: Defining dependency "acl" 00:01:48.765 Message: lib/bbdev: Defining dependency "bbdev" 00:01:48.765 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:48.765 Run-time dependency libelf found: YES 0.191 00:01:48.765 Message: lib/bpf: Defining dependency "bpf" 
00:01:48.765 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:48.765 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.765 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.765 Message: lib/distributor: Defining dependency "distributor" 00:01:48.765 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.765 Message: lib/efd: Defining dependency "efd" 00:01:48.765 Message: lib/eventdev: Defining dependency "eventdev" 00:01:48.765 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:48.765 Message: lib/gpudev: Defining dependency "gpudev" 00:01:48.765 Message: lib/gro: Defining dependency "gro" 00:01:48.765 Message: lib/gso: Defining dependency "gso" 00:01:48.765 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:48.765 Message: lib/jobstats: Defining dependency "jobstats" 00:01:48.765 Message: lib/latencystats: Defining dependency "latencystats" 00:01:48.765 Message: lib/lpm: Defining dependency "lpm" 00:01:48.765 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:48.765 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:48.765 Message: lib/member: Defining dependency "member" 00:01:48.765 Message: lib/pcapng: Defining dependency "pcapng" 00:01:48.765 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.765 Message: lib/power: Defining dependency "power" 00:01:48.765 Message: lib/rawdev: Defining dependency "rawdev" 00:01:48.765 Message: lib/regexdev: Defining dependency "regexdev" 00:01:48.765 Message: lib/mldev: Defining dependency "mldev" 00:01:48.765 Message: lib/rib: Defining dependency "rib" 00:01:48.765 Message: lib/reorder: Defining dependency "reorder" 00:01:48.765 Message: lib/sched: Defining dependency "sched" 00:01:48.765 Message: lib/security: Defining dependency "security" 00:01:48.765 Message: lib/stack: 
Defining dependency "stack" 00:01:48.765 Has header "linux/userfaultfd.h" : YES 00:01:48.765 Has header "linux/vduse.h" : YES 00:01:48.765 Message: lib/vhost: Defining dependency "vhost" 00:01:48.765 Message: lib/ipsec: Defining dependency "ipsec" 00:01:48.765 Message: lib/pdcp: Defining dependency "pdcp" 00:01:48.765 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.765 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.765 Message: lib/fib: Defining dependency "fib" 00:01:48.765 Message: lib/port: Defining dependency "port" 00:01:48.765 Message: lib/pdump: Defining dependency "pdump" 00:01:48.765 Message: lib/table: Defining dependency "table" 00:01:48.765 Message: lib/pipeline: Defining dependency "pipeline" 00:01:48.765 Message: lib/graph: Defining dependency "graph" 00:01:48.765 Message: lib/node: Defining dependency "node" 00:01:48.765 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.676 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.676 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.676 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.676 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:50.676 Compiler for C supports arguments -Wno-unused-value: YES 00:01:50.676 Compiler for C supports arguments -Wno-format: YES 00:01:50.676 Compiler for C supports arguments -Wno-format-security: YES 00:01:50.676 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:50.676 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:50.676 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:50.676 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:50.676 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.676 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.676 Compiler for C supports arguments 
-mavx512f: YES (cached) 00:01:50.676 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:50.676 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:50.676 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:50.676 Has header "sys/epoll.h" : YES 00:01:50.676 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:50.676 Configuring doxy-api-html.conf using configuration 00:01:50.676 Configuring doxy-api-man.conf using configuration 00:01:50.676 Program mandb found: YES (/usr/bin/mandb) 00:01:50.676 Program sphinx-build found: NO 00:01:50.676 Configuring rte_build_config.h using configuration 00:01:50.676 Message: 00:01:50.676 ================= 00:01:50.676 Applications Enabled 00:01:50.676 ================= 00:01:50.676 00:01:50.676 apps: 00:01:50.676 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:50.676 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:50.676 test-pmd, test-regex, test-sad, test-security-perf, 00:01:50.676 00:01:50.676 Message: 00:01:50.676 ================= 00:01:50.676 Libraries Enabled 00:01:50.676 ================= 00:01:50.676 00:01:50.676 libs: 00:01:50.676 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.676 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:50.676 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:50.676 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:50.676 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:50.676 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:50.676 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:50.676 00:01:50.676 00:01:50.676 Message: 00:01:50.676 =============== 00:01:50.676 Drivers Enabled 00:01:50.676 =============== 00:01:50.676 00:01:50.676 common: 00:01:50.676 00:01:50.676 bus: 00:01:50.676 pci, vdev, 
00:01:50.676 mempool: 00:01:50.676 ring, 00:01:50.676 dma: 00:01:50.676 00:01:50.676 net: 00:01:50.676 i40e, 00:01:50.676 raw: 00:01:50.676 00:01:50.676 crypto: 00:01:50.676 00:01:50.676 compress: 00:01:50.676 00:01:50.676 regex: 00:01:50.676 00:01:50.676 ml: 00:01:50.676 00:01:50.676 vdpa: 00:01:50.676 00:01:50.676 event: 00:01:50.676 00:01:50.676 baseband: 00:01:50.676 00:01:50.676 gpu: 00:01:50.676 00:01:50.676 00:01:50.676 Message: 00:01:50.676 ================= 00:01:50.676 Content Skipped 00:01:50.676 ================= 00:01:50.676 00:01:50.676 apps: 00:01:50.676 00:01:50.676 libs: 00:01:50.676 00:01:50.676 drivers: 00:01:50.676 common/cpt: not in enabled drivers build config 00:01:50.676 common/dpaax: not in enabled drivers build config 00:01:50.676 common/iavf: not in enabled drivers build config 00:01:50.676 common/idpf: not in enabled drivers build config 00:01:50.676 common/mvep: not in enabled drivers build config 00:01:50.676 common/octeontx: not in enabled drivers build config 00:01:50.676 bus/auxiliary: not in enabled drivers build config 00:01:50.676 bus/cdx: not in enabled drivers build config 00:01:50.676 bus/dpaa: not in enabled drivers build config 00:01:50.676 bus/fslmc: not in enabled drivers build config 00:01:50.676 bus/ifpga: not in enabled drivers build config 00:01:50.676 bus/platform: not in enabled drivers build config 00:01:50.676 bus/vmbus: not in enabled drivers build config 00:01:50.676 common/cnxk: not in enabled drivers build config 00:01:50.676 common/mlx5: not in enabled drivers build config 00:01:50.676 common/nfp: not in enabled drivers build config 00:01:50.676 common/qat: not in enabled drivers build config 00:01:50.676 common/sfc_efx: not in enabled drivers build config 00:01:50.676 mempool/bucket: not in enabled drivers build config 00:01:50.676 mempool/cnxk: not in enabled drivers build config 00:01:50.676 mempool/dpaa: not in enabled drivers build config 00:01:50.676 mempool/dpaa2: not in enabled drivers build config 
00:01:50.676 mempool/octeontx: not in enabled drivers build config 00:01:50.676 mempool/stack: not in enabled drivers build config 00:01:50.676 dma/cnxk: not in enabled drivers build config 00:01:50.676 dma/dpaa: not in enabled drivers build config 00:01:50.676 dma/dpaa2: not in enabled drivers build config 00:01:50.676 dma/hisilicon: not in enabled drivers build config 00:01:50.676 dma/idxd: not in enabled drivers build config 00:01:50.676 dma/ioat: not in enabled drivers build config 00:01:50.676 dma/skeleton: not in enabled drivers build config 00:01:50.676 net/af_packet: not in enabled drivers build config 00:01:50.676 net/af_xdp: not in enabled drivers build config 00:01:50.676 net/ark: not in enabled drivers build config 00:01:50.676 net/atlantic: not in enabled drivers build config 00:01:50.676 net/avp: not in enabled drivers build config 00:01:50.676 net/axgbe: not in enabled drivers build config 00:01:50.676 net/bnx2x: not in enabled drivers build config 00:01:50.676 net/bnxt: not in enabled drivers build config 00:01:50.676 net/bonding: not in enabled drivers build config 00:01:50.676 net/cnxk: not in enabled drivers build config 00:01:50.676 net/cpfl: not in enabled drivers build config 00:01:50.676 net/cxgbe: not in enabled drivers build config 00:01:50.676 net/dpaa: not in enabled drivers build config 00:01:50.676 net/dpaa2: not in enabled drivers build config 00:01:50.676 net/e1000: not in enabled drivers build config 00:01:50.676 net/ena: not in enabled drivers build config 00:01:50.676 net/enetc: not in enabled drivers build config 00:01:50.676 net/enetfec: not in enabled drivers build config 00:01:50.676 net/enic: not in enabled drivers build config 00:01:50.676 net/failsafe: not in enabled drivers build config 00:01:50.676 net/fm10k: not in enabled drivers build config 00:01:50.676 net/gve: not in enabled drivers build config 00:01:50.676 net/hinic: not in enabled drivers build config 00:01:50.676 net/hns3: not in enabled drivers build config 
00:01:50.676 net/iavf: not in enabled drivers build config 00:01:50.676 net/ice: not in enabled drivers build config 00:01:50.676 net/idpf: not in enabled drivers build config 00:01:50.676 net/igc: not in enabled drivers build config 00:01:50.676 net/ionic: not in enabled drivers build config 00:01:50.676 net/ipn3ke: not in enabled drivers build config 00:01:50.676 net/ixgbe: not in enabled drivers build config 00:01:50.676 net/mana: not in enabled drivers build config 00:01:50.676 net/memif: not in enabled drivers build config 00:01:50.676 net/mlx4: not in enabled drivers build config 00:01:50.676 net/mlx5: not in enabled drivers build config 00:01:50.676 net/mvneta: not in enabled drivers build config 00:01:50.676 net/mvpp2: not in enabled drivers build config 00:01:50.676 net/netvsc: not in enabled drivers build config 00:01:50.676 net/nfb: not in enabled drivers build config 00:01:50.676 net/nfp: not in enabled drivers build config 00:01:50.676 net/ngbe: not in enabled drivers build config 00:01:50.676 net/null: not in enabled drivers build config 00:01:50.676 net/octeontx: not in enabled drivers build config 00:01:50.676 net/octeon_ep: not in enabled drivers build config 00:01:50.676 net/pcap: not in enabled drivers build config 00:01:50.676 net/pfe: not in enabled drivers build config 00:01:50.677 net/qede: not in enabled drivers build config 00:01:50.677 net/ring: not in enabled drivers build config 00:01:50.677 net/sfc: not in enabled drivers build config 00:01:50.677 net/softnic: not in enabled drivers build config 00:01:50.677 net/tap: not in enabled drivers build config 00:01:50.677 net/thunderx: not in enabled drivers build config 00:01:50.677 net/txgbe: not in enabled drivers build config 00:01:50.677 net/vdev_netvsc: not in enabled drivers build config 00:01:50.677 net/vhost: not in enabled drivers build config 00:01:50.677 net/virtio: not in enabled drivers build config 00:01:50.677 net/vmxnet3: not in enabled drivers build config 00:01:50.677 
raw/cnxk_bphy: not in enabled drivers build config 00:01:50.677 raw/cnxk_gpio: not in enabled drivers build config 00:01:50.677 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:50.677 raw/ifpga: not in enabled drivers build config 00:01:50.677 raw/ntb: not in enabled drivers build config 00:01:50.677 raw/skeleton: not in enabled drivers build config 00:01:50.677 crypto/armv8: not in enabled drivers build config 00:01:50.677 crypto/bcmfs: not in enabled drivers build config 00:01:50.677 crypto/caam_jr: not in enabled drivers build config 00:01:50.677 crypto/ccp: not in enabled drivers build config 00:01:50.677 crypto/cnxk: not in enabled drivers build config 00:01:50.677 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.677 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.677 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.677 crypto/mlx5: not in enabled drivers build config 00:01:50.677 crypto/mvsam: not in enabled drivers build config 00:01:50.677 crypto/nitrox: not in enabled drivers build config 00:01:50.677 crypto/null: not in enabled drivers build config 00:01:50.677 crypto/octeontx: not in enabled drivers build config 00:01:50.677 crypto/openssl: not in enabled drivers build config 00:01:50.677 crypto/scheduler: not in enabled drivers build config 00:01:50.677 crypto/uadk: not in enabled drivers build config 00:01:50.677 crypto/virtio: not in enabled drivers build config 00:01:50.677 compress/isal: not in enabled drivers build config 00:01:50.677 compress/mlx5: not in enabled drivers build config 00:01:50.677 compress/octeontx: not in enabled drivers build config 00:01:50.677 compress/zlib: not in enabled drivers build config 00:01:50.677 regex/mlx5: not in enabled drivers build config 00:01:50.677 regex/cn9k: not in enabled drivers build config 00:01:50.677 ml/cnxk: not in enabled drivers build config 00:01:50.677 vdpa/ifc: not in enabled drivers build config 00:01:50.677 vdpa/mlx5: not in enabled drivers 
build config
00:01:50.677 vdpa/nfp: not in enabled drivers build config
00:01:50.677 vdpa/sfc: not in enabled drivers build config
00:01:50.677 event/cnxk: not in enabled drivers build config
00:01:50.677 event/dlb2: not in enabled drivers build config
00:01:50.677 event/dpaa: not in enabled drivers build config
00:01:50.677 event/dpaa2: not in enabled drivers build config
00:01:50.677 event/dsw: not in enabled drivers build config
00:01:50.677 event/opdl: not in enabled drivers build config
00:01:50.677 event/skeleton: not in enabled drivers build config
00:01:50.677 event/sw: not in enabled drivers build config
00:01:50.677 event/octeontx: not in enabled drivers build config
00:01:50.677 baseband/acc: not in enabled drivers build config
00:01:50.677 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:50.677 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:50.677 baseband/la12xx: not in enabled drivers build config
00:01:50.677 baseband/null: not in enabled drivers build config
00:01:50.677 baseband/turbo_sw: not in enabled drivers build config
00:01:50.677 gpu/cuda: not in enabled drivers build config
00:01:50.677 
00:01:50.677 
00:01:50.677 Build targets in project: 217
00:01:50.677 
00:01:50.677 DPDK 23.11.0
00:01:50.677 
00:01:50.677 User defined options
00:01:50.677 libdir : lib
00:01:50.677 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:50.677 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:50.677 c_link_args :
00:01:50.677 enable_docs : false
00:01:50.677 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:01:50.677 enable_kmods : false
00:01:50.677 machine : native
00:01:50.677 tests : false
00:01:50.677 
00:01:50.677 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:50.677 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:50.677 16:15:20 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96
00:01:50.677 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:50.677 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:50.677 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:50.677 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:50.677 [4/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:50.677 [5/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:50.677 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:50.677 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:50.677 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:50.677 [9/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:50.677 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:50.677 [11/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:50.677 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:50.677 [13/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:50.677 [14/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:50.677 [15/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:50.677 [16/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:50.677 [17/707] Linking static target lib/librte_kvargs.a
00:01:50.944 [18/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:50.944 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:50.944 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:50.944 [21/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:50.944 [22/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:50.944 [23/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:50.944 [24/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:50.944 [25/707] Linking static target lib/librte_log.a
00:01:50.944 [26/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:50.944 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:50.944 [28/707] Linking static target lib/librte_pci.a
00:01:50.944 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:50.944 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:50.944 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:50.944 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:50.944 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:51.209 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:51.209 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:51.209 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:51.209 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.209 [38/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:51.209 [39/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:51.474 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:51.474 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:51.474 [42/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:51.474 [43/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:51.474 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:51.474 [45/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:51.474 [46/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:51.474 [47/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:51.474 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:51.474 [49/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.474 [50/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:51.474 [51/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:51.474 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:51.474 [53/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:51.474 [54/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:51.474 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:51.474 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:51.474 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:51.474 [58/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:51.474 [59/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:51.474 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:51.474 [61/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:51.474 [62/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:51.474 [63/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:51.474 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:51.474 [65/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:51.474 [66/707] Linking static target lib/librte_meter.a
00:01:51.474 [67/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:51.474 [68/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:51.474 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:51.474 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:51.474 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:51.474 [72/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:51.474 [73/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:51.474 [74/707] Linking static target lib/librte_ring.a
00:01:51.474 [75/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:51.474 [76/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:51.474 [77/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:51.474 [78/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:51.474 [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:51.474 [80/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:51.474 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:51.474 [82/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:51.474 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:51.474 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:51.474 [85/707] Linking static target lib/librte_cmdline.a
00:01:51.474 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:51.739 [87/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:51.739 [88/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:51.739 [89/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:51.739 [90/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:51.739 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:51.739 [92/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:51.739 [93/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:51.739 [94/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:51.740 [95/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:51.740 [96/707] Linking static target lib/librte_net.a
00:01:51.740 [97/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:51.740 [98/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:51.740 [99/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:51.740 [100/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:51.740 [101/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:51.740 [102/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:51.740 [103/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:01:51.740 [104/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:01:51.740 [105/707] Linking static target lib/librte_metrics.a
00:01:51.740 [106/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.740 [107/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:51.740 [108/707] Linking target lib/librte_log.so.24.0
00:01:51.740 [109/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:51.740 [110/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:51.740 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:52.007 [112/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:52.007 [113/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.007 [114/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:01:52.007 [115/707] Linking static target lib/librte_cfgfile.a
00:01:52.007 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:52.007 [117/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:52.007 [118/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:01:52.007 [119/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:52.007 [120/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:52.007 [121/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:52.007 [122/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.007 [123/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:52.007 [124/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:01:52.007 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:01:52.007 [126/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:52.007 [127/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:52.007 [128/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.007 [129/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:01:52.007 [130/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:01:52.007 [131/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:52.007 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:52.007 [133/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:52.270 [134/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:52.270 [135/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:52.270 [136/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:01:52.270 [137/707] Linking static target lib/librte_mempool.a
00:01:52.270 [138/707] Linking target lib/librte_kvargs.so.24.0
00:01:52.270 [139/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:52.270 [140/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:01:52.270 [141/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:52.270 [142/707] Linking static target lib/librte_bitratestats.a
00:01:52.270 [143/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:01:52.270 [144/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:52.270 [145/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:52.270 [146/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:01:52.270 [147/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:52.270 [148/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:52.270 [149/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:01:52.270 [150/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:52.270 [151/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:52.270 [152/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:01:52.270 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:52.270 [154/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:01:52.270 [155/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:52.270 [156/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:01:52.270 [157/707] Linking static target lib/librte_timer.a
00:01:52.270 [158/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:01:52.270 [159/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:01:52.270 [160/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:52.270 [161/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.534 [162/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:01:52.534 [163/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:01:52.534 [164/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:01:52.534 [165/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:52.534 [166/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.534 [167/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:01:52.534 [168/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:01:52.534 [169/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:01:52.534 [170/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:01:52.534 [171/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:52.534 [172/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:01:52.534 [173/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:52.534 [174/707] Linking static target lib/librte_compressdev.a
00:01:52.534 [175/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:01:52.534 [176/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:01:52.534 [177/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:52.534 [178/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:01:52.534 [179/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:01:52.534 [180/707] Linking static target lib/librte_jobstats.a
00:01:52.534 [181/707] Linking static target lib/librte_dispatcher.a
00:01:52.534 [182/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.534 [183/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:01:52.534 [184/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:52.534 [185/707] Linking static target lib/librte_telemetry.a
00:01:52.534 [186/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:52.534 [187/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:52.534 [188/707] Linking static target lib/librte_rcu.a
00:01:52.800 [189/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:01:52.800 [190/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:01:52.800 [191/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:01:52.800 [192/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:52.800 [193/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:01:52.800 [194/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:01:52.800 [195/707] Linking static target lib/librte_gpudev.a
00:01:52.800 [196/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:01:52.800 [197/707] Linking static target lib/librte_bbdev.a
00:01:52.800 [198/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:52.800 [199/707] Linking static target lib/librte_eal.a
00:01:52.800 [200/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:01:52.800 [201/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:52.800 [202/707] Linking static target lib/librte_dmadev.a
00:01:52.800 [203/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:01:52.800 [204/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:01:52.800 [205/707] Linking static target lib/librte_gro.a
00:01:52.800 [206/707] Linking static target lib/librte_gso.a
00:01:52.800 [207/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:52.800 [208/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:01:52.800 [209/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:52.800 [210/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:01:52.800 [211/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:01:52.800 [212/707] Linking static target lib/librte_latencystats.a
00:01:52.800 [213/707] Linking static target lib/librte_mbuf.a
00:01:52.800 [214/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:01:52.800 [215/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:01:52.800 [216/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:01:52.800 [217/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:01:52.800 [218/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:01:52.800 [219/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:01:53.069 [220/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:01:53.069 [221/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:01:53.069 [222/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:53.069 [223/707] Linking static target lib/librte_distributor.a
00:01:53.069 [224/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.069 [225/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:01:53.069 [226/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:01:53.069 [227/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.069 [228/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:53.069 [229/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:01:53.069 [230/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:01:53.069 [231/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:01:53.069 [232/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:01:53.069 [233/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:01:53.069 [234/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:53.069 [235/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:01:53.069 [236/707] Linking static target lib/librte_stack.a
00:01:53.069 [237/707] Linking static target lib/librte_ip_frag.a
00:01:53.069 [238/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:53.069 [239/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.069 [240/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.069 [241/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:53.069 [242/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.069 [243/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:01:53.069 [244/707] Linking static target lib/librte_regexdev.a
00:01:53.069 [245/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:01:53.069 [246/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.331 [247/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:01:53.331 [248/707] Linking static target lib/librte_rawdev.a
00:01:53.331 [249/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:01:53.331 [250/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.331 [251/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:01:53.331 [252/707] Linking static target lib/librte_mldev.a
00:01:53.331 [253/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:53.331 [254/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:01:53.331 [255/707] Linking static target lib/librte_power.a
00:01:53.331 [256/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.331 [257/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:53.331 [258/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:01:53.331 [259/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.331 [260/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.331 [261/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:01:53.331 [262/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:01:53.331 [263/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:01:53.331 [264/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.331 [265/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:01:53.331 [266/707] Linking target lib/librte_telemetry.so.24.0
00:01:53.331 [267/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:53.331 [268/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:01:53.331 [269/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:01:53.331 [270/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.331 [271/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:01:53.598 [272/707] Linking static target lib/librte_pcapng.a
00:01:53.598 [273/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:53.598 [274/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.598 [275/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:53.598 [276/707] Linking static target lib/librte_reorder.a
00:01:53.598 [277/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:53.598 [278/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.598 [279/707] Linking static target lib/librte_security.a
00:01:53.598 [280/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:01:53.598 [281/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:01:53.598 [282/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:01:53.598 [283/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:53.598 [284/707] Linking static target lib/librte_bpf.a
00:01:53.598 [285/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:53.598 [286/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:01:53.598 [287/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.598 [288/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:01:53.598 [289/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:53.598 [290/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:01:53.864 [291/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:01:53.864 [292/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:01:53.864 [293/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:01:53.864 [294/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:01:53.864 [295/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:01:53.864 [296/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.864 [297/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:01:53.864 [298/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:01:53.864 [299/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:53.864 [300/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.864 [301/707] Linking static target lib/librte_lpm.a
00:01:53.864 [302/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:01:53.864 [303/707] Linking static target lib/librte_efd.a
00:01:53.864 [304/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:01:53.864 [305/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.864 [306/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:01:53.864 [307/707] Linking static target lib/librte_rib.a
00:01:53.864 [308/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:01:53.864 [309/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:53.864 [310/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:01:54.129 [311/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:01:54.129 [312/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.130 [313/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.130 [314/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:54.130 [315/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:01:54.130 [316/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:01:54.130 [317/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.130 [318/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:01:54.130 [319/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:01:54.130 [320/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:54.130 [321/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:01:54.130 [322/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.130 [323/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:01:54.130 [324/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:01:54.130 [325/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.130 [326/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:01:54.130 [327/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:01:54.130 [328/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:01:54.130 [329/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:01:54.130 [330/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:01:54.130 [331/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:01:54.396 [332/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:01:54.396 [333/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:01:54.396 [334/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:01:54.396 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:01:54.396 [336/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:01:54.396 [337/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:01:54.396 [338/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.396 [339/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:01:54.396 [340/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:01:54.396 [341/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:01:54.396 [342/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.396 [343/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.396 [344/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:01:54.396 [345/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:01:54.396 [346/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.396 [347/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:01:54.396 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:01:54.396 [349/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:01:54.396 [350/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:01:54.660 [351/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:01:54.660 [352/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:01:54.660 [353/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:01:54.660 [354/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:01:54.660 [355/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:01:54.660 [356/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:01:54.660 [357/707] Linking static target lib/librte_fib.a
00:01:54.660 [358/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.660 [359/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:01:54.660 [360/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:01:54.660 [361/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:01:54.660 [362/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:01:54.660 [363/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:54.660 [364/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:01:54.660 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:01:54.660 [366/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:54.660 [367/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:01:54.660 [368/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:01:54.925 [369/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:01:54.925 [370/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:01:54.925 [371/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:01:54.925 [372/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:54.925 [373/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:01:54.925 [374/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:01:54.925 [375/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:54.925 [376/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:54.925 [377/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:01:54.925 [378/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:54.925 [379/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:01:54.925 [380/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:54.925 [381/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:01:54.925 [382/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:55.190 [383/707] Linking static target lib/librte_graph.a
00:01:55.190 [384/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:01:55.190 [385/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:01:55.190 [386/707] Linking static target lib/librte_pdump.a
00:01:55.190 [387/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:01:55.190 [388/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:01:55.190 [389/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:01:55.190 [390/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:01:55.190 [391/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:01:55.190 [392/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:01:55.190 [393/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:01:55.190 [394/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:55.190 [395/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:01:55.190 [396/707] Linking static target lib/librte_cryptodev.a
00:01:55.190 [397/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:01:55.190 [398/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:55.190 [399/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:55.190 [400/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:01:55.190 [401/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:01:55.190 [402/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:01:55.190 [403/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:01:55.190 [404/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.190 [405/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:01:55.190 [406/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:01:55.460 [407/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:01:55.460 [408/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:01:55.460 [409/707] Linking static target lib/librte_sched.a
00:01:55.460 [410/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:01:55.460 [411/707] Linking static target lib/librte_table.a
00:01:55.460 [412/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:01:55.460 [413/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:01:55.460 [414/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:01:55.460 [415/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:55.460 [416/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:01:55.460 [417/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:01:55.460 [418/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:55.460 [419/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:55.460 [420/707] Linking static target drivers/librte_bus_vdev.a
00:01:55.460 [421/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:01:55.460 [422/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:01:55.460 [423/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:01:55.460 [424/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:01:55.460 [425/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:01:55.460 [426/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.460 [427/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:01:55.460 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:01:55.460 [429/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.460 [430/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:01:55.721 [431/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:01:55.721 [432/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:01:55.721 [433/707] Linking static target lib/librte_member.a
00:01:55.721 [434/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:01:55.721 [435/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:01:55.721 [436/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:01:55.721 [437/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:55.721 [438/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:01:55.721 [439/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:01:55.721 [440/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:01:55.721 [441/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:55.721 [442/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:55.721 [443/707] Linking static target drivers/librte_bus_pci.a
00:01:55.721 [444/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:55.721 [445/707] Linking static target lib/librte_hash.a
00:01:55.721 [446/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:01:55.988 [447/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:01:55.988 [448/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:01:55.988 [449/707] Compiling C object
app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:55.988 [450/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.988 [451/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.988 [452/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:55.988 [453/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:55.988 [454/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.988 [455/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:55.988 [456/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:55.988 [457/707] Linking static target lib/librte_ipsec.a 00:01:55.988 [458/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:55.988 [459/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:55.988 [460/707] Linking static target lib/librte_pdcp.a 00:01:55.988 [461/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:55.988 [462/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.988 [463/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:55.988 [464/707] Linking static target lib/librte_node.a 00:01:55.988 [465/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:55.988 [466/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:55.988 [467/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:55.988 [468/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.988 [469/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:56.254 [470/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:56.254 [471/707] Linking static 
target lib/acl/libavx2_tmp.a 00:01:56.254 [472/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:56.254 [473/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:56.254 [474/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:56.254 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:56.254 [476/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.254 [477/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:56.254 [478/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:56.254 [479/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:56.254 [480/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:56.254 [481/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:56.254 [482/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.254 [483/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:56.254 [484/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.254 [485/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.254 [486/707] Linking static target drivers/librte_mempool_ring.a 00:01:56.254 [487/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:56.254 [488/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:56.254 [489/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:56.254 [490/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:56.514 [491/707] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:56.514 [492/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:56.514 [493/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:56.514 [494/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:56.514 [495/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:56.514 [496/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:56.514 [497/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:56.514 [498/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:56.514 [499/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:56.514 [500/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:56.514 [501/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:56.514 [502/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:56.514 [503/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:56.514 [504/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:56.514 [505/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.514 [506/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.514 [507/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:56.514 [508/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.514 [509/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:56.514 [510/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.514 [511/707] Compiling C 
object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:56.514 [512/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:56.514 [513/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:56.514 [514/707] Linking static target lib/librte_port.a 00:01:56.514 [515/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:56.514 [516/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:56.514 [517/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:56.514 [518/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.514 [519/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:56.514 [520/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.774 [521/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:56.774 [522/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:56.774 [523/707] Linking static target lib/librte_eventdev.a 00:01:56.774 [524/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:56.774 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:56.774 [526/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:56.774 [527/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:56.774 [528/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:56.774 [529/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:56.774 [530/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:56.774 [531/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:56.774 [532/707] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:56.774 [533/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:56.774 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:56.774 [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:57.033 [536/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:57.033 [537/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:57.033 [538/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:57.033 [539/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:57.033 [540/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:57.033 [541/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:57.033 [542/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:57.033 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:57.033 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:57.033 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:57.033 [546/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:57.033 [547/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:57.033 [548/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:57.033 [549/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.033 [550/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:57.291 [551/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:57.291 [552/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:57.291 [553/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 
00:01:57.291 [554/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:57.291 [555/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:57.291 [556/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.291 [557/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:57.291 [558/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:57.291 [559/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:57.291 [560/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:57.291 [561/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:57.550 [562/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:57.550 [563/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:57.550 [564/707] Linking static target lib/librte_acl.a 00:01:57.550 [565/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:57.550 [566/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:57.550 [567/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:57.550 [568/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:57.550 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:57.808 [570/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:57.808 [571/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.808 [572/707] Linking static target lib/librte_ethdev.a 00:01:57.808 [573/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.808 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:58.067 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 
00:01:58.326 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:58.326 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:58.894 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:58.894 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:59.152 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:59.718 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:59.719 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:59.719 [583/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.719 [584/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:59.977 [585/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:59.977 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:59.977 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:59.977 [588/707] Linking static target drivers/librte_net_i40e.a 00:01:59.977 [589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:00.913 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:00.913 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.848 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:03.226 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.486 [594/707] Linking target lib/librte_eal.so.24.0 00:02:03.486 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:03.486 [596/707] Linking target lib/librte_timer.so.24.0 00:02:03.486 [597/707] Linking target lib/librte_pci.so.24.0 
00:02:03.486 [598/707] Linking target lib/librte_ring.so.24.0 00:02:03.486 [599/707] Linking target lib/librte_meter.so.24.0 00:02:03.486 [600/707] Linking target lib/librte_jobstats.so.24.0 00:02:03.486 [601/707] Linking target lib/librte_cfgfile.so.24.0 00:02:03.486 [602/707] Linking target lib/librte_rawdev.so.24.0 00:02:03.486 [603/707] Linking target lib/librte_dmadev.so.24.0 00:02:03.486 [604/707] Linking target lib/librte_stack.so.24.0 00:02:03.486 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:03.486 [606/707] Linking target lib/librte_acl.so.24.0 00:02:03.745 [607/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:03.745 [608/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:03.745 [609/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:03.745 [610/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:03.745 [611/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:03.745 [612/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:03.745 [613/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:03.745 [614/707] Linking target lib/librte_rcu.so.24.0 00:02:03.745 [615/707] Linking target lib/librte_mempool.so.24.0 00:02:03.745 [616/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:03.745 [617/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:03.745 [618/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:03.745 [619/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:04.004 [620/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:04.004 [621/707] Linking target lib/librte_mbuf.so.24.0 00:02:04.004 [622/707] Linking target lib/librte_rib.so.24.0 
00:02:04.004 [623/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:04.004 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:04.004 [625/707] Linking target lib/librte_fib.so.24.0 00:02:04.004 [626/707] Linking target lib/librte_net.so.24.0 00:02:04.004 [627/707] Linking target lib/librte_regexdev.so.24.0 00:02:04.004 [628/707] Linking target lib/librte_bbdev.so.24.0 00:02:04.004 [629/707] Linking target lib/librte_compressdev.so.24.0 00:02:04.004 [630/707] Linking target lib/librte_distributor.so.24.0 00:02:04.004 [631/707] Linking target lib/librte_mldev.so.24.0 00:02:04.004 [632/707] Linking target lib/librte_reorder.so.24.0 00:02:04.004 [633/707] Linking target lib/librte_gpudev.so.24.0 00:02:04.004 [634/707] Linking target lib/librte_sched.so.24.0 00:02:04.004 [635/707] Linking target lib/librte_cryptodev.so.24.0 00:02:04.262 [636/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:04.262 [637/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:04.262 [638/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:04.262 [639/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:04.262 [640/707] Linking target lib/librte_cmdline.so.24.0 00:02:04.262 [641/707] Linking target lib/librte_hash.so.24.0 00:02:04.262 [642/707] Linking target lib/librte_security.so.24.0 00:02:04.522 [643/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:04.522 [644/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:04.522 [645/707] Linking target lib/librte_pdcp.so.24.0 00:02:04.522 [646/707] Linking target lib/librte_member.so.24.0 00:02:04.522 [647/707] Linking target lib/librte_efd.so.24.0 00:02:04.522 [648/707] Linking target lib/librte_lpm.so.24.0 00:02:04.522 [649/707] 
Linking target lib/librte_ipsec.so.24.0 00:02:04.522 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:04.522 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:05.458 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.458 [653/707] Linking target lib/librte_ethdev.so.24.0 00:02:05.458 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:05.458 [655/707] Linking target lib/librte_metrics.so.24.0 00:02:05.458 [656/707] Linking target lib/librte_gso.so.24.0 00:02:05.458 [657/707] Linking target lib/librte_pcapng.so.24.0 00:02:05.458 [658/707] Linking target lib/librte_gro.so.24.0 00:02:05.458 [659/707] Linking target lib/librte_ip_frag.so.24.0 00:02:05.458 [660/707] Linking target lib/librte_bpf.so.24.0 00:02:05.458 [661/707] Linking target lib/librte_power.so.24.0 00:02:05.716 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:05.716 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:05.716 [664/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:05.716 [665/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:05.716 [666/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:05.716 [667/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:05.716 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:05.716 [669/707] Linking target lib/librte_pdump.so.24.0 00:02:05.716 [670/707] Linking target lib/librte_graph.so.24.0 00:02:05.716 [671/707] Linking target lib/librte_bitratestats.so.24.0 00:02:05.716 [672/707] Linking target lib/librte_latencystats.so.24.0 00:02:05.716 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:02:05.716 [674/707] Linking target 
lib/librte_port.so.24.0 00:02:05.975 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:05.975 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:05.975 [677/707] Linking target lib/librte_node.so.24.0 00:02:05.975 [678/707] Linking target lib/librte_table.so.24.0 00:02:06.234 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:07.611 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:07.611 [681/707] Linking static target lib/librte_pipeline.a 00:02:08.177 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:08.177 [683/707] Linking static target lib/librte_vhost.a 00:02:08.745 [684/707] Linking target app/dpdk-test-acl 00:02:08.745 [685/707] Linking target app/dpdk-test-dma-perf 00:02:08.745 [686/707] Linking target app/dpdk-pdump 00:02:08.745 [687/707] Linking target app/dpdk-test-fib 00:02:08.745 [688/707] Linking target app/dpdk-proc-info 00:02:08.745 [689/707] Linking target app/dpdk-dumpcap 00:02:08.745 [690/707] Linking target app/dpdk-test-cmdline 00:02:08.745 [691/707] Linking target app/dpdk-test-sad 00:02:08.745 [692/707] Linking target app/dpdk-test-gpudev 00:02:08.745 [693/707] Linking target app/dpdk-test-regex 00:02:08.745 [694/707] Linking target app/dpdk-test-compress-perf 00:02:08.745 [695/707] Linking target app/dpdk-graph 00:02:08.745 [696/707] Linking target app/dpdk-test-mldev 00:02:08.745 [697/707] Linking target app/dpdk-test-bbdev 00:02:08.745 [698/707] Linking target app/dpdk-test-eventdev 00:02:08.745 [699/707] Linking target app/dpdk-test-pipeline 00:02:08.745 [700/707] Linking target app/dpdk-test-flow-perf 00:02:08.745 [701/707] Linking target app/dpdk-test-crypto-perf 00:02:08.745 [702/707] Linking target app/dpdk-test-security-perf 00:02:08.745 [703/707] Linking target app/dpdk-testpmd 00:02:10.123 [704/707] Generating lib/vhost.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:10.123 [705/707] Linking target lib/librte_vhost.so.24.0 00:02:12.658 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.658 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:12.658 16:15:42 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:12.658 16:15:42 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:12.658 16:15:42 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:12.658 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:12.658 [0/1] Installing files. 00:02:12.921 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:12.921 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:12.922 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:12.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:12.923 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:12.923 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:12.923 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:12.924 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:12.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:12.925 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.925 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:12.926 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:12.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:12.926 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.926 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.926 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.926 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.926 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.926 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.926 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.926 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.926 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:12.927 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.190 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.191 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.191 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.191 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:13.191 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.191 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:13.191 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.191 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:13.191 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:13.191 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:13.191 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.193 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:13.194 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.195 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:13.195 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:13.195 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 
00:02:13.195 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:13.195 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:13.195 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:13.195 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:13.195 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:13.195 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:13.195 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:13.195 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:13.195 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:13.195 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:13.195 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:13.195 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:13.195 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:13.195 Installing symlink pointing to librte_mbuf.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:13.195 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:13.195 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:13.195 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:13.195 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:13.195 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:13.195 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:13.195 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:13.195 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:13.195 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:13.195 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:13.195 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:13.195 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:13.195 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:13.195 Installing symlink pointing to 
librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:13.195 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:13.195 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:13.195 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:13.195 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:13.195 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:13.195 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:13.195 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:13.195 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:13.195 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:13.195 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:13.195 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:13.195 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:13.195 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 
00:02:13.195 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:13.195 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:13.195 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:13.195 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:13.195 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:13.195 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:13.195 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:13.195 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:13.195 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:13.195 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:13.195 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:13.195 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:13.195 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:13.195 Installing symlink 
pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:13.195 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:13.195 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:13.195 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:13.195 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:13.195 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:13.195 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:13.195 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:13.195 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:13.195 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:13.195 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:13.195 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:13.195 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:13.195 Installing symlink pointing to librte_lpm.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:13.195 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:13.195 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:13.195 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:13.195 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:13.195 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:13.195 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:13.195 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:13.195 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:13.195 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:13.196 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:13.196 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:13.196 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:13.196 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:13.196 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:13.196 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 
00:02:13.196 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:13.196 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:13.196 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:13.196 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:13.196 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:13.196 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:13.196 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:13.196 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:13.196 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:13.196 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:13.196 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:13.196 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:13.196 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:13.196 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:13.196 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:13.196 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:13.196 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:13.196 
Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:13.196 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:13.196 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:13.196 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:13.196 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:13.196 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:13.196 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:13.196 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:13.196 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:13.196 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:13.196 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:13.196 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:13.196 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:13.196 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 
00:02:13.196 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:13.196 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:13.196 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:13.196 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:13.196 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:13.196 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:13.196 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:13.196 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:13.196 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:13.196 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:13.196 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:13.196 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:13.196 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:13.196 Installing symlink 
pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:13.196 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:13.196 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:13.196 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:13.196 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:13.196 16:15:43 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:13.196 16:15:43 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.196 00:02:13.196 real 0m28.667s 00:02:13.196 user 9m21.548s 00:02:13.196 sys 2m7.249s 00:02:13.196 16:15:43 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:13.196 16:15:43 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:13.196 ************************************ 00:02:13.196 END TEST build_native_dpdk 00:02:13.196 ************************************ 00:02:13.196 16:15:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:13.196 16:15:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:13.196 16:15:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:13.196 16:15:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:13.196 16:15:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:13.196 16:15:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:13.196 16:15:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:13.196 16:15:43 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:13.456 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:13.456 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:13.456 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:13.714 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:13.972 Using 'verbs' RDMA provider 00:02:27.122 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:39.464 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:39.464 Creating mk/config.mk...done. 00:02:39.464 Creating mk/cc.flags.mk...done. 00:02:39.464 Type 'make' to build. 
00:02:39.464 16:16:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:39.464 16:16:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:39.464 16:16:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:39.464 16:16:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.464 ************************************ 00:02:39.464 START TEST make 00:02:39.464 ************************************ 00:02:39.464 16:16:09 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:40.848 The Meson build system 00:02:40.849 Version: 1.5.0 00:02:40.849 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:40.849 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:40.849 Build type: native build 00:02:40.849 Project name: libvfio-user 00:02:40.849 Project version: 0.0.1 00:02:40.849 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.849 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:40.849 Host machine cpu family: x86_64 00:02:40.849 Host machine cpu: x86_64 00:02:40.849 Run-time dependency threads found: YES 00:02:40.849 Library dl found: YES 00:02:40.849 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.849 Run-time dependency json-c found: YES 0.17 00:02:40.849 Run-time dependency cmocka found: YES 1.1.7 00:02:40.849 Program pytest-3 found: NO 00:02:40.849 Program flake8 found: NO 00:02:40.849 Program misspell-fixer found: NO 00:02:40.849 Program restructuredtext-lint found: NO 00:02:40.849 Program valgrind found: YES (/usr/bin/valgrind) 00:02:40.849 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.849 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.849 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.849 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites 
arg in add_test_setup. 00:02:40.849 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:40.849 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:40.849 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:40.849 Build targets in project: 8 00:02:40.849 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:40.849 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:40.849 00:02:40.849 libvfio-user 0.0.1 00:02:40.849 00:02:40.849 User defined options 00:02:40.849 buildtype : debug 00:02:40.849 default_library: shared 00:02:40.849 libdir : /usr/local/lib 00:02:40.849 00:02:40.849 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.784 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:41.784 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:41.784 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:41.784 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:41.784 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:41.784 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:41.784 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:41.784 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:41.784 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:41.784 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:41.784 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:41.784 [11/37] Compiling C object samples/null.p/null.c.o 00:02:41.784 [12/37] Compiling C object 
test/unit_tests.p/.._lib_pci.c.o 00:02:41.784 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:41.784 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:41.784 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:41.784 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:41.784 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:41.784 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:41.784 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:41.784 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:41.784 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:41.784 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:41.784 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:41.784 [24/37] Compiling C object samples/server.p/server.c.o 00:02:41.784 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:41.784 [26/37] Compiling C object samples/client.p/client.c.o 00:02:41.784 [27/37] Linking target samples/client 00:02:42.043 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:42.043 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:42.043 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:42.043 [31/37] Linking target test/unit_tests 00:02:42.043 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:42.043 [33/37] Linking target samples/gpio-pci-idio-16 00:02:42.043 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:42.043 [35/37] Linking target samples/null 00:02:42.043 [36/37] Linking target samples/server 00:02:42.043 [37/37] Linking target samples/lspci 00:02:42.043 INFO: autodetecting backend as ninja 00:02:42.043 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:42.303 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:42.561 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:42.561 ninja: no work to do. 00:03:09.111 CC lib/log/log.o 00:03:09.111 CC lib/ut/ut.o 00:03:09.111 CC lib/log/log_flags.o 00:03:09.111 CC lib/log/log_deprecated.o 00:03:09.111 CC lib/ut_mock/mock.o 00:03:09.111 LIB libspdk_ut_mock.a 00:03:09.111 LIB libspdk_ut.a 00:03:09.111 LIB libspdk_log.a 00:03:09.111 SO libspdk_ut_mock.so.6.0 00:03:09.111 SO libspdk_ut.so.2.0 00:03:09.111 SO libspdk_log.so.7.1 00:03:09.111 SYMLINK libspdk_ut_mock.so 00:03:09.111 SYMLINK libspdk_ut.so 00:03:09.111 SYMLINK libspdk_log.so 00:03:09.111 CC lib/ioat/ioat.o 00:03:09.111 CC lib/dma/dma.o 00:03:09.111 CXX lib/trace_parser/trace.o 00:03:09.111 CC lib/util/base64.o 00:03:09.111 CC lib/util/bit_array.o 00:03:09.111 CC lib/util/cpuset.o 00:03:09.111 CC lib/util/crc16.o 00:03:09.111 CC lib/util/crc32.o 00:03:09.111 CC lib/util/crc32c.o 00:03:09.111 CC lib/util/crc64.o 00:03:09.111 CC lib/util/crc32_ieee.o 00:03:09.111 CC lib/util/dif.o 00:03:09.111 CC lib/util/fd.o 00:03:09.111 CC lib/util/fd_group.o 00:03:09.111 CC lib/util/file.o 00:03:09.370 CC lib/util/hexlify.o 00:03:09.370 CC lib/util/iov.o 00:03:09.370 CC lib/util/math.o 00:03:09.370 CC lib/util/net.o 00:03:09.370 CC lib/util/pipe.o 00:03:09.370 CC lib/util/strerror_tls.o 00:03:09.370 CC lib/util/string.o 00:03:09.370 CC lib/util/uuid.o 00:03:09.370 CC lib/util/xor.o 00:03:09.370 CC lib/util/zipf.o 00:03:09.370 CC lib/util/md5.o 00:03:09.370 CC lib/vfio_user/host/vfio_user.o 00:03:09.370 CC lib/vfio_user/host/vfio_user_pci.o 00:03:09.370 LIB libspdk_dma.a 00:03:09.370 SO libspdk_dma.so.5.0 00:03:09.628 LIB libspdk_ioat.a 00:03:09.628 
SYMLINK libspdk_dma.so 00:03:09.628 SO libspdk_ioat.so.7.0 00:03:09.628 SYMLINK libspdk_ioat.so 00:03:09.628 LIB libspdk_vfio_user.a 00:03:09.628 SO libspdk_vfio_user.so.5.0 00:03:09.628 LIB libspdk_util.a 00:03:09.628 SYMLINK libspdk_vfio_user.so 00:03:09.886 SO libspdk_util.so.10.1 00:03:09.886 SYMLINK libspdk_util.so 00:03:09.886 LIB libspdk_trace_parser.a 00:03:09.886 SO libspdk_trace_parser.so.6.0 00:03:10.147 SYMLINK libspdk_trace_parser.so 00:03:10.147 CC lib/json/json_parse.o 00:03:10.147 CC lib/json/json_util.o 00:03:10.147 CC lib/json/json_write.o 00:03:10.147 CC lib/vmd/vmd.o 00:03:10.147 CC lib/conf/conf.o 00:03:10.147 CC lib/vmd/led.o 00:03:10.147 CC lib/rdma_utils/rdma_utils.o 00:03:10.147 CC lib/idxd/idxd.o 00:03:10.147 CC lib/idxd/idxd_user.o 00:03:10.147 CC lib/idxd/idxd_kernel.o 00:03:10.147 CC lib/env_dpdk/env.o 00:03:10.147 CC lib/env_dpdk/memory.o 00:03:10.147 CC lib/env_dpdk/pci.o 00:03:10.147 CC lib/env_dpdk/init.o 00:03:10.147 CC lib/env_dpdk/threads.o 00:03:10.404 CC lib/env_dpdk/pci_ioat.o 00:03:10.404 CC lib/env_dpdk/pci_virtio.o 00:03:10.404 CC lib/env_dpdk/pci_vmd.o 00:03:10.404 CC lib/env_dpdk/pci_idxd.o 00:03:10.404 CC lib/env_dpdk/pci_event.o 00:03:10.404 CC lib/env_dpdk/sigbus_handler.o 00:03:10.404 CC lib/env_dpdk/pci_dpdk.o 00:03:10.404 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:10.404 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:10.404 LIB libspdk_conf.a 00:03:10.661 LIB libspdk_json.a 00:03:10.661 LIB libspdk_rdma_utils.a 00:03:10.661 SO libspdk_conf.so.6.0 00:03:10.661 SO libspdk_json.so.6.0 00:03:10.661 SO libspdk_rdma_utils.so.1.0 00:03:10.661 SYMLINK libspdk_conf.so 00:03:10.661 SYMLINK libspdk_json.so 00:03:10.661 SYMLINK libspdk_rdma_utils.so 00:03:10.661 LIB libspdk_idxd.a 00:03:10.661 SO libspdk_idxd.so.12.1 00:03:10.919 LIB libspdk_vmd.a 00:03:10.919 SO libspdk_vmd.so.6.0 00:03:10.919 SYMLINK libspdk_idxd.so 00:03:10.919 SYMLINK libspdk_vmd.so 00:03:10.919 CC lib/jsonrpc/jsonrpc_server.o 00:03:10.919 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.919 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.919 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.919 CC lib/rdma_provider/common.o 00:03:10.919 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:11.178 LIB libspdk_rdma_provider.a 00:03:11.178 LIB libspdk_jsonrpc.a 00:03:11.178 SO libspdk_rdma_provider.so.7.0 00:03:11.178 SO libspdk_jsonrpc.so.6.0 00:03:11.178 SYMLINK libspdk_rdma_provider.so 00:03:11.178 SYMLINK libspdk_jsonrpc.so 00:03:11.437 LIB libspdk_env_dpdk.a 00:03:11.437 SO libspdk_env_dpdk.so.15.1 00:03:11.437 SYMLINK libspdk_env_dpdk.so 00:03:11.695 CC lib/rpc/rpc.o 00:03:11.955 LIB libspdk_rpc.a 00:03:11.955 SO libspdk_rpc.so.6.0 00:03:11.955 SYMLINK libspdk_rpc.so 00:03:12.214 CC lib/notify/notify.o 00:03:12.214 CC lib/notify/notify_rpc.o 00:03:12.214 CC lib/trace/trace.o 00:03:12.214 CC lib/keyring/keyring.o 00:03:12.214 CC lib/trace/trace_flags.o 00:03:12.214 CC lib/keyring/keyring_rpc.o 00:03:12.214 CC lib/trace/trace_rpc.o 00:03:12.473 LIB libspdk_notify.a 00:03:12.473 SO libspdk_notify.so.6.0 00:03:12.473 LIB libspdk_keyring.a 00:03:12.473 LIB libspdk_trace.a 00:03:12.473 SO libspdk_keyring.so.2.0 00:03:12.473 SYMLINK libspdk_notify.so 00:03:12.473 SO libspdk_trace.so.11.0 00:03:12.473 SYMLINK libspdk_keyring.so 00:03:12.732 SYMLINK libspdk_trace.so 00:03:12.991 CC lib/sock/sock.o 00:03:12.991 CC lib/sock/sock_rpc.o 00:03:12.991 CC lib/thread/thread.o 00:03:12.991 CC lib/thread/iobuf.o 00:03:13.249 LIB libspdk_sock.a 00:03:13.249 SO libspdk_sock.so.10.0 00:03:13.249 SYMLINK libspdk_sock.so 00:03:13.817 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.817 CC lib/nvme/nvme_ctrlr.o 00:03:13.817 CC lib/nvme/nvme_fabric.o 00:03:13.817 CC lib/nvme/nvme_ns_cmd.o 00:03:13.817 CC lib/nvme/nvme_ns.o 00:03:13.817 CC lib/nvme/nvme_pcie_common.o 00:03:13.817 CC lib/nvme/nvme_pcie.o 00:03:13.817 CC lib/nvme/nvme_qpair.o 00:03:13.817 CC lib/nvme/nvme.o 00:03:13.817 CC lib/nvme/nvme_quirks.o 00:03:13.817 CC lib/nvme/nvme_transport.o 
00:03:13.817 CC lib/nvme/nvme_discovery.o 00:03:13.817 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:13.817 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:13.817 CC lib/nvme/nvme_tcp.o 00:03:13.817 CC lib/nvme/nvme_opal.o 00:03:13.817 CC lib/nvme/nvme_io_msg.o 00:03:13.817 CC lib/nvme/nvme_poll_group.o 00:03:13.817 CC lib/nvme/nvme_zns.o 00:03:13.817 CC lib/nvme/nvme_stubs.o 00:03:13.817 CC lib/nvme/nvme_auth.o 00:03:13.817 CC lib/nvme/nvme_cuse.o 00:03:13.817 CC lib/nvme/nvme_vfio_user.o 00:03:13.817 CC lib/nvme/nvme_rdma.o 00:03:14.076 LIB libspdk_thread.a 00:03:14.076 SO libspdk_thread.so.11.0 00:03:14.076 SYMLINK libspdk_thread.so 00:03:14.643 CC lib/init/subsystem_rpc.o 00:03:14.643 CC lib/init/json_config.o 00:03:14.643 CC lib/init/subsystem.o 00:03:14.643 CC lib/virtio/virtio_vhost_user.o 00:03:14.643 CC lib/accel/accel.o 00:03:14.643 CC lib/virtio/virtio.o 00:03:14.643 CC lib/init/rpc.o 00:03:14.643 CC lib/accel/accel_rpc.o 00:03:14.643 CC lib/accel/accel_sw.o 00:03:14.643 CC lib/virtio/virtio_vfio_user.o 00:03:14.643 CC lib/blob/blobstore.o 00:03:14.643 CC lib/virtio/virtio_pci.o 00:03:14.643 CC lib/blob/request.o 00:03:14.643 CC lib/blob/zeroes.o 00:03:14.643 CC lib/blob/blob_bs_dev.o 00:03:14.643 CC lib/fsdev/fsdev.o 00:03:14.643 CC lib/fsdev/fsdev_io.o 00:03:14.643 CC lib/fsdev/fsdev_rpc.o 00:03:14.643 CC lib/vfu_tgt/tgt_endpoint.o 00:03:14.643 CC lib/vfu_tgt/tgt_rpc.o 00:03:14.643 LIB libspdk_init.a 00:03:14.901 SO libspdk_init.so.6.0 00:03:14.901 LIB libspdk_virtio.a 00:03:14.901 LIB libspdk_vfu_tgt.a 00:03:14.901 SO libspdk_virtio.so.7.0 00:03:14.901 SYMLINK libspdk_init.so 00:03:14.901 SO libspdk_vfu_tgt.so.3.0 00:03:14.901 SYMLINK libspdk_virtio.so 00:03:14.901 SYMLINK libspdk_vfu_tgt.so 00:03:15.165 LIB libspdk_fsdev.a 00:03:15.165 SO libspdk_fsdev.so.2.0 00:03:15.165 SYMLINK libspdk_fsdev.so 00:03:15.165 CC lib/event/app.o 00:03:15.165 CC lib/event/reactor.o 00:03:15.165 CC lib/event/log_rpc.o 00:03:15.165 CC lib/event/app_rpc.o 00:03:15.165 CC 
lib/event/scheduler_static.o 00:03:15.424 LIB libspdk_accel.a 00:03:15.424 SO libspdk_accel.so.16.0 00:03:15.424 LIB libspdk_nvme.a 00:03:15.424 SYMLINK libspdk_accel.so 00:03:15.424 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:15.424 LIB libspdk_event.a 00:03:15.424 SO libspdk_nvme.so.15.0 00:03:15.683 SO libspdk_event.so.14.0 00:03:15.683 SYMLINK libspdk_event.so 00:03:15.683 SYMLINK libspdk_nvme.so 00:03:15.683 CC lib/bdev/bdev.o 00:03:15.683 CC lib/bdev/bdev_rpc.o 00:03:15.683 CC lib/bdev/bdev_zone.o 00:03:15.683 CC lib/bdev/part.o 00:03:15.683 CC lib/bdev/scsi_nvme.o 00:03:15.941 LIB libspdk_fuse_dispatcher.a 00:03:15.941 SO libspdk_fuse_dispatcher.so.1.0 00:03:16.200 SYMLINK libspdk_fuse_dispatcher.so 00:03:16.769 LIB libspdk_blob.a 00:03:16.769 SO libspdk_blob.so.12.0 00:03:16.769 SYMLINK libspdk_blob.so 00:03:17.028 CC lib/blobfs/blobfs.o 00:03:17.028 CC lib/lvol/lvol.o 00:03:17.028 CC lib/blobfs/tree.o 00:03:17.595 LIB libspdk_bdev.a 00:03:17.595 SO libspdk_bdev.so.17.0 00:03:17.595 LIB libspdk_blobfs.a 00:03:17.855 SYMLINK libspdk_bdev.so 00:03:17.855 SO libspdk_blobfs.so.11.0 00:03:17.855 LIB libspdk_lvol.a 00:03:17.855 SYMLINK libspdk_blobfs.so 00:03:17.855 SO libspdk_lvol.so.11.0 00:03:17.855 SYMLINK libspdk_lvol.so 00:03:18.114 CC lib/ftl/ftl_core.o 00:03:18.114 CC lib/ftl/ftl_init.o 00:03:18.114 CC lib/ftl/ftl_layout.o 00:03:18.114 CC lib/ftl/ftl_debug.o 00:03:18.114 CC lib/ftl/ftl_io.o 00:03:18.114 CC lib/ftl/ftl_sb.o 00:03:18.114 CC lib/scsi/dev.o 00:03:18.114 CC lib/ublk/ublk.o 00:03:18.114 CC lib/scsi/lun.o 00:03:18.114 CC lib/ftl/ftl_l2p.o 00:03:18.114 CC lib/nbd/nbd.o 00:03:18.114 CC lib/ublk/ublk_rpc.o 00:03:18.114 CC lib/scsi/port.o 00:03:18.114 CC lib/ftl/ftl_l2p_flat.o 00:03:18.114 CC lib/nbd/nbd_rpc.o 00:03:18.114 CC lib/scsi/scsi.o 00:03:18.114 CC lib/ftl/ftl_nv_cache.o 00:03:18.114 CC lib/nvmf/ctrlr.o 00:03:18.114 CC lib/scsi/scsi_bdev.o 00:03:18.114 CC lib/ftl/ftl_band.o 00:03:18.114 CC lib/nvmf/ctrlr_discovery.o 00:03:18.114 CC 
lib/scsi/scsi_pr.o 00:03:18.114 CC lib/ftl/ftl_band_ops.o 00:03:18.114 CC lib/nvmf/ctrlr_bdev.o 00:03:18.114 CC lib/scsi/scsi_rpc.o 00:03:18.114 CC lib/nvmf/subsystem.o 00:03:18.114 CC lib/ftl/ftl_writer.o 00:03:18.114 CC lib/ftl/ftl_l2p_cache.o 00:03:18.114 CC lib/scsi/task.o 00:03:18.114 CC lib/ftl/ftl_reloc.o 00:03:18.114 CC lib/ftl/ftl_rq.o 00:03:18.114 CC lib/nvmf/nvmf.o 00:03:18.114 CC lib/nvmf/tcp.o 00:03:18.114 CC lib/nvmf/nvmf_rpc.o 00:03:18.114 CC lib/nvmf/transport.o 00:03:18.114 CC lib/ftl/ftl_p2l.o 00:03:18.114 CC lib/nvmf/mdns_server.o 00:03:18.114 CC lib/nvmf/stubs.o 00:03:18.114 CC lib/ftl/ftl_p2l_log.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.114 CC lib/nvmf/vfio_user.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.114 CC lib/nvmf/auth.o 00:03:18.114 CC lib/nvmf/rdma.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:18.114 CC lib/ftl/utils/ftl_conf.o 00:03:18.114 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:18.114 CC lib/ftl/utils/ftl_md.o 00:03:18.114 CC lib/ftl/utils/ftl_mempool.o 00:03:18.114 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.114 CC lib/ftl/utils/ftl_property.o 00:03:18.114 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:18.114 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.114 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.114 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.114 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.114 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.114 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:18.114 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:18.114 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:18.114 
CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:18.114 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.114 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:18.114 CC lib/ftl/base/ftl_base_dev.o 00:03:18.114 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:18.114 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.114 CC lib/ftl/ftl_trace.o 00:03:18.682 LIB libspdk_nbd.a 00:03:18.682 LIB libspdk_scsi.a 00:03:18.682 SO libspdk_nbd.so.7.0 00:03:18.682 SO libspdk_scsi.so.9.0 00:03:18.940 SYMLINK libspdk_nbd.so 00:03:18.940 SYMLINK libspdk_scsi.so 00:03:18.940 LIB libspdk_ublk.a 00:03:18.940 SO libspdk_ublk.so.3.0 00:03:18.940 SYMLINK libspdk_ublk.so 00:03:19.198 LIB libspdk_ftl.a 00:03:19.198 CC lib/vhost/vhost.o 00:03:19.198 CC lib/vhost/vhost_rpc.o 00:03:19.198 CC lib/vhost/vhost_scsi.o 00:03:19.198 CC lib/vhost/vhost_blk.o 00:03:19.198 CC lib/vhost/rte_vhost_user.o 00:03:19.198 CC lib/iscsi/conn.o 00:03:19.198 CC lib/iscsi/init_grp.o 00:03:19.198 CC lib/iscsi/iscsi.o 00:03:19.198 CC lib/iscsi/param.o 00:03:19.198 CC lib/iscsi/portal_grp.o 00:03:19.198 CC lib/iscsi/tgt_node.o 00:03:19.198 CC lib/iscsi/iscsi_subsystem.o 00:03:19.198 CC lib/iscsi/iscsi_rpc.o 00:03:19.198 CC lib/iscsi/task.o 00:03:19.456 SO libspdk_ftl.so.9.0 00:03:19.456 SYMLINK libspdk_ftl.so 00:03:20.023 LIB libspdk_nvmf.a 00:03:20.023 SO libspdk_nvmf.so.20.0 00:03:20.023 LIB libspdk_vhost.a 00:03:20.023 SO libspdk_vhost.so.8.0 00:03:20.023 SYMLINK libspdk_nvmf.so 00:03:20.282 SYMLINK libspdk_vhost.so 00:03:20.282 LIB libspdk_iscsi.a 00:03:20.282 SO libspdk_iscsi.so.8.0 00:03:20.541 SYMLINK libspdk_iscsi.so 00:03:21.108 CC module/env_dpdk/env_dpdk_rpc.o 00:03:21.108 CC module/vfu_device/vfu_virtio.o 00:03:21.108 CC module/vfu_device/vfu_virtio_blk.o 00:03:21.108 CC module/vfu_device/vfu_virtio_scsi.o 00:03:21.108 CC module/vfu_device/vfu_virtio_rpc.o 00:03:21.108 CC module/vfu_device/vfu_virtio_fs.o 00:03:21.108 CC module/sock/posix/posix.o 00:03:21.108 LIB libspdk_env_dpdk_rpc.a 00:03:21.108 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:03:21.108 CC module/accel/iaa/accel_iaa.o 00:03:21.108 CC module/accel/iaa/accel_iaa_rpc.o 00:03:21.108 CC module/keyring/file/keyring.o 00:03:21.108 CC module/accel/error/accel_error.o 00:03:21.108 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:21.108 CC module/keyring/file/keyring_rpc.o 00:03:21.108 CC module/accel/error/accel_error_rpc.o 00:03:21.108 CC module/accel/dsa/accel_dsa.o 00:03:21.108 CC module/scheduler/gscheduler/gscheduler.o 00:03:21.108 CC module/accel/dsa/accel_dsa_rpc.o 00:03:21.108 CC module/keyring/linux/keyring.o 00:03:21.108 CC module/keyring/linux/keyring_rpc.o 00:03:21.108 CC module/accel/ioat/accel_ioat.o 00:03:21.108 CC module/accel/ioat/accel_ioat_rpc.o 00:03:21.108 CC module/blob/bdev/blob_bdev.o 00:03:21.108 SO libspdk_env_dpdk_rpc.so.6.0 00:03:21.108 CC module/fsdev/aio/fsdev_aio.o 00:03:21.108 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:21.108 CC module/fsdev/aio/linux_aio_mgr.o 00:03:21.367 SYMLINK libspdk_env_dpdk_rpc.so 00:03:21.367 LIB libspdk_keyring_file.a 00:03:21.367 LIB libspdk_scheduler_dpdk_governor.a 00:03:21.367 LIB libspdk_keyring_linux.a 00:03:21.367 LIB libspdk_scheduler_gscheduler.a 00:03:21.367 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:21.367 SO libspdk_keyring_file.so.2.0 00:03:21.367 SO libspdk_keyring_linux.so.1.0 00:03:21.367 SO libspdk_scheduler_gscheduler.so.4.0 00:03:21.367 LIB libspdk_scheduler_dynamic.a 00:03:21.367 LIB libspdk_accel_iaa.a 00:03:21.367 LIB libspdk_accel_ioat.a 00:03:21.367 LIB libspdk_accel_error.a 00:03:21.367 SO libspdk_scheduler_dynamic.so.4.0 00:03:21.367 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:21.367 SO libspdk_accel_iaa.so.3.0 00:03:21.367 SO libspdk_accel_ioat.so.6.0 00:03:21.367 SO libspdk_accel_error.so.2.0 00:03:21.367 SYMLINK libspdk_scheduler_gscheduler.so 00:03:21.367 SYMLINK libspdk_keyring_file.so 00:03:21.367 SYMLINK libspdk_keyring_linux.so 00:03:21.367 LIB libspdk_accel_dsa.a 00:03:21.367 LIB 
libspdk_blob_bdev.a 00:03:21.625 SYMLINK libspdk_scheduler_dynamic.so 00:03:21.625 SYMLINK libspdk_accel_ioat.so 00:03:21.625 SO libspdk_accel_dsa.so.5.0 00:03:21.625 SYMLINK libspdk_accel_iaa.so 00:03:21.625 SO libspdk_blob_bdev.so.12.0 00:03:21.626 SYMLINK libspdk_accel_error.so 00:03:21.626 LIB libspdk_vfu_device.a 00:03:21.626 SYMLINK libspdk_accel_dsa.so 00:03:21.626 SO libspdk_vfu_device.so.3.0 00:03:21.626 SYMLINK libspdk_blob_bdev.so 00:03:21.626 SYMLINK libspdk_vfu_device.so 00:03:21.885 LIB libspdk_fsdev_aio.a 00:03:21.885 LIB libspdk_sock_posix.a 00:03:21.885 SO libspdk_fsdev_aio.so.1.0 00:03:21.885 SO libspdk_sock_posix.so.6.0 00:03:21.885 SYMLINK libspdk_fsdev_aio.so 00:03:21.885 SYMLINK libspdk_sock_posix.so 00:03:22.143 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:22.143 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.143 CC module/bdev/delay/vbdev_delay.o 00:03:22.143 CC module/bdev/raid/bdev_raid.o 00:03:22.143 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.143 CC module/bdev/raid/raid0.o 00:03:22.143 CC module/bdev/raid/raid1.o 00:03:22.143 CC module/bdev/raid/concat.o 00:03:22.143 CC module/bdev/error/vbdev_error.o 00:03:22.143 CC module/bdev/error/vbdev_error_rpc.o 00:03:22.143 CC module/bdev/gpt/gpt.o 00:03:22.143 CC module/bdev/null/bdev_null.o 00:03:22.143 CC module/bdev/null/bdev_null_rpc.o 00:03:22.143 CC module/bdev/gpt/vbdev_gpt.o 00:03:22.143 CC module/bdev/nvme/bdev_nvme.o 00:03:22.143 CC module/bdev/malloc/bdev_malloc.o 00:03:22.143 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:22.143 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:22.143 CC module/bdev/nvme/nvme_rpc.o 00:03:22.143 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.143 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.143 CC module/bdev/nvme/vbdev_opal.o 00:03:22.143 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.143 CC module/bdev/passthru/vbdev_passthru.o 00:03:22.143 CC module/bdev/lvol/vbdev_lvol.o 00:03:22.143 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:22.143 CC 
module/blobfs/bdev/blobfs_bdev.o 00:03:22.143 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:22.143 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:22.143 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:22.143 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:22.143 CC module/bdev/split/vbdev_split.o 00:03:22.143 CC module/bdev/split/vbdev_split_rpc.o 00:03:22.143 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.143 CC module/bdev/aio/bdev_aio.o 00:03:22.143 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.143 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.143 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.143 CC module/bdev/ftl/bdev_ftl.o 00:03:22.143 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.143 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.143 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.401 LIB libspdk_blobfs_bdev.a 00:03:22.401 SO libspdk_blobfs_bdev.so.6.0 00:03:22.401 LIB libspdk_bdev_null.a 00:03:22.401 LIB libspdk_bdev_split.a 00:03:22.401 LIB libspdk_bdev_error.a 00:03:22.401 SO libspdk_bdev_null.so.6.0 00:03:22.401 SYMLINK libspdk_blobfs_bdev.so 00:03:22.401 LIB libspdk_bdev_gpt.a 00:03:22.401 SO libspdk_bdev_split.so.6.0 00:03:22.401 LIB libspdk_bdev_passthru.a 00:03:22.401 LIB libspdk_bdev_ftl.a 00:03:22.401 SO libspdk_bdev_error.so.6.0 00:03:22.401 SO libspdk_bdev_gpt.so.6.0 00:03:22.401 SO libspdk_bdev_ftl.so.6.0 00:03:22.401 LIB libspdk_bdev_delay.a 00:03:22.401 SO libspdk_bdev_passthru.so.6.0 00:03:22.401 SYMLINK libspdk_bdev_null.so 00:03:22.401 LIB libspdk_bdev_zone_block.a 00:03:22.401 LIB libspdk_bdev_aio.a 00:03:22.660 SYMLINK libspdk_bdev_split.so 00:03:22.660 LIB libspdk_bdev_iscsi.a 00:03:22.660 SO libspdk_bdev_delay.so.6.0 00:03:22.660 SYMLINK libspdk_bdev_error.so 00:03:22.660 LIB libspdk_bdev_malloc.a 00:03:22.660 SO libspdk_bdev_zone_block.so.6.0 00:03:22.660 SO libspdk_bdev_aio.so.6.0 00:03:22.660 SYMLINK libspdk_bdev_passthru.so 00:03:22.660 SYMLINK libspdk_bdev_gpt.so 00:03:22.660 SYMLINK libspdk_bdev_ftl.so 00:03:22.660 SO 
libspdk_bdev_iscsi.so.6.0 00:03:22.660 SO libspdk_bdev_malloc.so.6.0 00:03:22.660 SYMLINK libspdk_bdev_delay.so 00:03:22.660 SYMLINK libspdk_bdev_aio.so 00:03:22.660 SYMLINK libspdk_bdev_zone_block.so 00:03:22.660 SYMLINK libspdk_bdev_iscsi.so 00:03:22.660 LIB libspdk_bdev_virtio.a 00:03:22.660 SYMLINK libspdk_bdev_malloc.so 00:03:22.660 LIB libspdk_bdev_lvol.a 00:03:22.660 SO libspdk_bdev_virtio.so.6.0 00:03:22.660 SO libspdk_bdev_lvol.so.6.0 00:03:22.660 SYMLINK libspdk_bdev_virtio.so 00:03:22.660 SYMLINK libspdk_bdev_lvol.so 00:03:22.923 LIB libspdk_bdev_raid.a 00:03:22.923 SO libspdk_bdev_raid.so.6.0 00:03:23.181 SYMLINK libspdk_bdev_raid.so 00:03:24.118 LIB libspdk_bdev_nvme.a 00:03:24.118 SO libspdk_bdev_nvme.so.7.1 00:03:24.118 SYMLINK libspdk_bdev_nvme.so 00:03:25.055 CC module/event/subsystems/sock/sock.o 00:03:25.055 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:25.055 CC module/event/subsystems/vmd/vmd.o 00:03:25.055 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:25.055 CC module/event/subsystems/iobuf/iobuf.o 00:03:25.055 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:25.055 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:25.055 CC module/event/subsystems/fsdev/fsdev.o 00:03:25.055 CC module/event/subsystems/scheduler/scheduler.o 00:03:25.055 CC module/event/subsystems/keyring/keyring.o 00:03:25.055 LIB libspdk_event_keyring.a 00:03:25.055 LIB libspdk_event_vfu_tgt.a 00:03:25.055 LIB libspdk_event_vmd.a 00:03:25.055 LIB libspdk_event_sock.a 00:03:25.055 LIB libspdk_event_vhost_blk.a 00:03:25.055 LIB libspdk_event_scheduler.a 00:03:25.055 LIB libspdk_event_fsdev.a 00:03:25.055 LIB libspdk_event_iobuf.a 00:03:25.055 SO libspdk_event_keyring.so.1.0 00:03:25.055 SO libspdk_event_scheduler.so.4.0 00:03:25.055 SO libspdk_event_vfu_tgt.so.3.0 00:03:25.055 SO libspdk_event_vhost_blk.so.3.0 00:03:25.055 SO libspdk_event_fsdev.so.1.0 00:03:25.055 SO libspdk_event_sock.so.5.0 00:03:25.055 SO libspdk_event_vmd.so.6.0 00:03:25.055 SO 
libspdk_event_iobuf.so.3.0 00:03:25.055 SYMLINK libspdk_event_keyring.so 00:03:25.055 SYMLINK libspdk_event_scheduler.so 00:03:25.055 SYMLINK libspdk_event_fsdev.so 00:03:25.055 SYMLINK libspdk_event_vfu_tgt.so 00:03:25.055 SYMLINK libspdk_event_vhost_blk.so 00:03:25.055 SYMLINK libspdk_event_sock.so 00:03:25.055 SYMLINK libspdk_event_vmd.so 00:03:25.055 SYMLINK libspdk_event_iobuf.so 00:03:25.627 CC module/event/subsystems/accel/accel.o 00:03:25.627 LIB libspdk_event_accel.a 00:03:25.627 SO libspdk_event_accel.so.6.0 00:03:25.627 SYMLINK libspdk_event_accel.so 00:03:26.195 CC module/event/subsystems/bdev/bdev.o 00:03:26.195 LIB libspdk_event_bdev.a 00:03:26.195 SO libspdk_event_bdev.so.6.0 00:03:26.195 SYMLINK libspdk_event_bdev.so 00:03:26.763 CC module/event/subsystems/nbd/nbd.o 00:03:26.763 CC module/event/subsystems/scsi/scsi.o 00:03:26.763 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:26.763 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:26.763 CC module/event/subsystems/ublk/ublk.o 00:03:26.763 LIB libspdk_event_nbd.a 00:03:26.763 LIB libspdk_event_ublk.a 00:03:26.763 LIB libspdk_event_scsi.a 00:03:26.763 SO libspdk_event_nbd.so.6.0 00:03:26.763 SO libspdk_event_ublk.so.3.0 00:03:26.763 SO libspdk_event_scsi.so.6.0 00:03:27.022 LIB libspdk_event_nvmf.a 00:03:27.022 SYMLINK libspdk_event_nbd.so 00:03:27.022 SYMLINK libspdk_event_ublk.so 00:03:27.022 SYMLINK libspdk_event_scsi.so 00:03:27.022 SO libspdk_event_nvmf.so.6.0 00:03:27.022 SYMLINK libspdk_event_nvmf.so 00:03:27.281 CC module/event/subsystems/iscsi/iscsi.o 00:03:27.281 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:27.541 LIB libspdk_event_iscsi.a 00:03:27.541 LIB libspdk_event_vhost_scsi.a 00:03:27.541 SO libspdk_event_iscsi.so.6.0 00:03:27.541 SO libspdk_event_vhost_scsi.so.3.0 00:03:27.541 SYMLINK libspdk_event_iscsi.so 00:03:27.541 SYMLINK libspdk_event_vhost_scsi.so 00:03:27.800 SO libspdk.so.6.0 00:03:27.800 SYMLINK libspdk.so 00:03:28.058 CC test/rpc_client/rpc_client_test.o 
00:03:28.058 CC app/trace_record/trace_record.o 00:03:28.058 CXX app/trace/trace.o 00:03:28.058 CC app/spdk_lspci/spdk_lspci.o 00:03:28.058 CC app/spdk_nvme_perf/perf.o 00:03:28.058 CC app/spdk_top/spdk_top.o 00:03:28.058 CC app/spdk_nvme_identify/identify.o 00:03:28.058 CC app/spdk_nvme_discover/discovery_aer.o 00:03:28.058 TEST_HEADER include/spdk/accel.h 00:03:28.058 TEST_HEADER include/spdk/assert.h 00:03:28.058 TEST_HEADER include/spdk/accel_module.h 00:03:28.058 TEST_HEADER include/spdk/base64.h 00:03:28.058 TEST_HEADER include/spdk/barrier.h 00:03:28.058 TEST_HEADER include/spdk/bdev.h 00:03:28.058 TEST_HEADER include/spdk/bdev_zone.h 00:03:28.058 TEST_HEADER include/spdk/bdev_module.h 00:03:28.058 TEST_HEADER include/spdk/bit_pool.h 00:03:28.058 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:28.058 TEST_HEADER include/spdk/blob_bdev.h 00:03:28.058 TEST_HEADER include/spdk/bit_array.h 00:03:28.058 TEST_HEADER include/spdk/blobfs.h 00:03:28.058 TEST_HEADER include/spdk/blob.h 00:03:28.058 TEST_HEADER include/spdk/config.h 00:03:28.058 TEST_HEADER include/spdk/conf.h 00:03:28.058 TEST_HEADER include/spdk/cpuset.h 00:03:28.058 TEST_HEADER include/spdk/crc16.h 00:03:28.058 CC app/spdk_dd/spdk_dd.o 00:03:28.058 TEST_HEADER include/spdk/crc32.h 00:03:28.058 TEST_HEADER include/spdk/dif.h 00:03:28.058 TEST_HEADER include/spdk/crc64.h 00:03:28.058 TEST_HEADER include/spdk/endian.h 00:03:28.058 TEST_HEADER include/spdk/dma.h 00:03:28.058 TEST_HEADER include/spdk/event.h 00:03:28.058 TEST_HEADER include/spdk/env_dpdk.h 00:03:28.058 TEST_HEADER include/spdk/env.h 00:03:28.058 TEST_HEADER include/spdk/fd.h 00:03:28.058 TEST_HEADER include/spdk/fd_group.h 00:03:28.058 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:28.058 TEST_HEADER include/spdk/fsdev.h 00:03:28.058 TEST_HEADER include/spdk/fsdev_module.h 00:03:28.058 CC app/nvmf_tgt/nvmf_main.o 00:03:28.058 TEST_HEADER include/spdk/file.h 00:03:28.058 TEST_HEADER include/spdk/gpt_spec.h 00:03:28.058 TEST_HEADER 
include/spdk/ftl.h 00:03:28.058 TEST_HEADER include/spdk/histogram_data.h 00:03:28.058 CC app/iscsi_tgt/iscsi_tgt.o 00:03:28.058 TEST_HEADER include/spdk/hexlify.h 00:03:28.058 TEST_HEADER include/spdk/idxd.h 00:03:28.058 TEST_HEADER include/spdk/init.h 00:03:28.058 TEST_HEADER include/spdk/ioat.h 00:03:28.058 TEST_HEADER include/spdk/idxd_spec.h 00:03:28.058 TEST_HEADER include/spdk/ioat_spec.h 00:03:28.058 TEST_HEADER include/spdk/json.h 00:03:28.058 TEST_HEADER include/spdk/iscsi_spec.h 00:03:28.058 TEST_HEADER include/spdk/jsonrpc.h 00:03:28.058 TEST_HEADER include/spdk/keyring.h 00:03:28.058 TEST_HEADER include/spdk/keyring_module.h 00:03:28.058 TEST_HEADER include/spdk/log.h 00:03:28.058 TEST_HEADER include/spdk/likely.h 00:03:28.058 TEST_HEADER include/spdk/md5.h 00:03:28.058 TEST_HEADER include/spdk/lvol.h 00:03:28.058 TEST_HEADER include/spdk/memory.h 00:03:28.058 TEST_HEADER include/spdk/net.h 00:03:28.058 TEST_HEADER include/spdk/nbd.h 00:03:28.058 TEST_HEADER include/spdk/notify.h 00:03:28.058 TEST_HEADER include/spdk/mmio.h 00:03:28.058 TEST_HEADER include/spdk/nvme.h 00:03:28.058 TEST_HEADER include/spdk/nvme_intel.h 00:03:28.058 CC app/spdk_tgt/spdk_tgt.o 00:03:28.058 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:28.058 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:28.058 TEST_HEADER include/spdk/nvme_spec.h 00:03:28.058 TEST_HEADER include/spdk/nvme_zns.h 00:03:28.058 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:28.058 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:28.058 TEST_HEADER include/spdk/nvmf_spec.h 00:03:28.058 TEST_HEADER include/spdk/nvmf.h 00:03:28.319 TEST_HEADER include/spdk/opal.h 00:03:28.319 TEST_HEADER include/spdk/pci_ids.h 00:03:28.319 TEST_HEADER include/spdk/nvmf_transport.h 00:03:28.319 TEST_HEADER include/spdk/opal_spec.h 00:03:28.319 TEST_HEADER include/spdk/pipe.h 00:03:28.319 TEST_HEADER include/spdk/rpc.h 00:03:28.319 TEST_HEADER include/spdk/scheduler.h 00:03:28.319 TEST_HEADER include/spdk/queue.h 00:03:28.319 TEST_HEADER 
include/spdk/scsi.h 00:03:28.319 TEST_HEADER include/spdk/reduce.h 00:03:28.319 TEST_HEADER include/spdk/sock.h 00:03:28.319 TEST_HEADER include/spdk/scsi_spec.h 00:03:28.319 TEST_HEADER include/spdk/string.h 00:03:28.319 TEST_HEADER include/spdk/stdinc.h 00:03:28.319 TEST_HEADER include/spdk/thread.h 00:03:28.319 TEST_HEADER include/spdk/trace.h 00:03:28.319 TEST_HEADER include/spdk/trace_parser.h 00:03:28.319 TEST_HEADER include/spdk/tree.h 00:03:28.320 TEST_HEADER include/spdk/ublk.h 00:03:28.320 TEST_HEADER include/spdk/util.h 00:03:28.320 TEST_HEADER include/spdk/uuid.h 00:03:28.320 TEST_HEADER include/spdk/version.h 00:03:28.320 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:28.320 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:28.320 TEST_HEADER include/spdk/xor.h 00:03:28.320 TEST_HEADER include/spdk/vmd.h 00:03:28.320 TEST_HEADER include/spdk/vhost.h 00:03:28.320 TEST_HEADER include/spdk/zipf.h 00:03:28.320 CXX test/cpp_headers/accel.o 00:03:28.320 CXX test/cpp_headers/accel_module.o 00:03:28.320 CXX test/cpp_headers/assert.o 00:03:28.320 CXX test/cpp_headers/barrier.o 00:03:28.320 CXX test/cpp_headers/bdev.o 00:03:28.320 CXX test/cpp_headers/base64.o 00:03:28.320 CXX test/cpp_headers/bdev_module.o 00:03:28.320 CXX test/cpp_headers/bit_pool.o 00:03:28.320 CXX test/cpp_headers/bdev_zone.o 00:03:28.320 CXX test/cpp_headers/bit_array.o 00:03:28.320 CXX test/cpp_headers/blobfs.o 00:03:28.320 CXX test/cpp_headers/blob_bdev.o 00:03:28.320 CXX test/cpp_headers/blobfs_bdev.o 00:03:28.320 CXX test/cpp_headers/conf.o 00:03:28.320 CXX test/cpp_headers/blob.o 00:03:28.320 CXX test/cpp_headers/config.o 00:03:28.320 CXX test/cpp_headers/cpuset.o 00:03:28.320 CXX test/cpp_headers/crc16.o 00:03:28.320 CXX test/cpp_headers/crc32.o 00:03:28.320 CXX test/cpp_headers/dma.o 00:03:28.320 CXX test/cpp_headers/dif.o 00:03:28.320 CXX test/cpp_headers/crc64.o 00:03:28.320 CXX test/cpp_headers/env_dpdk.o 00:03:28.320 CXX test/cpp_headers/env.o 00:03:28.320 CXX 
test/cpp_headers/fd_group.o 00:03:28.320 CXX test/cpp_headers/endian.o 00:03:28.320 CXX test/cpp_headers/fd.o 00:03:28.320 CXX test/cpp_headers/event.o 00:03:28.320 CXX test/cpp_headers/file.o 00:03:28.320 CXX test/cpp_headers/fsdev.o 00:03:28.320 CXX test/cpp_headers/fsdev_module.o 00:03:28.320 CXX test/cpp_headers/ftl.o 00:03:28.320 CXX test/cpp_headers/hexlify.o 00:03:28.320 CXX test/cpp_headers/gpt_spec.o 00:03:28.320 CXX test/cpp_headers/histogram_data.o 00:03:28.320 CXX test/cpp_headers/idxd.o 00:03:28.320 CXX test/cpp_headers/idxd_spec.o 00:03:28.320 CXX test/cpp_headers/init.o 00:03:28.320 CXX test/cpp_headers/ioat_spec.o 00:03:28.320 CXX test/cpp_headers/ioat.o 00:03:28.320 CXX test/cpp_headers/iscsi_spec.o 00:03:28.320 CXX test/cpp_headers/json.o 00:03:28.320 CXX test/cpp_headers/jsonrpc.o 00:03:28.320 CXX test/cpp_headers/keyring.o 00:03:28.320 CXX test/cpp_headers/likely.o 00:03:28.320 CXX test/cpp_headers/log.o 00:03:28.320 CXX test/cpp_headers/keyring_module.o 00:03:28.320 CXX test/cpp_headers/lvol.o 00:03:28.320 CXX test/cpp_headers/md5.o 00:03:28.320 CXX test/cpp_headers/memory.o 00:03:28.320 CXX test/cpp_headers/nbd.o 00:03:28.320 CXX test/cpp_headers/mmio.o 00:03:28.320 CXX test/cpp_headers/notify.o 00:03:28.320 CXX test/cpp_headers/nvme.o 00:03:28.320 CXX test/cpp_headers/net.o 00:03:28.320 CXX test/cpp_headers/nvme_intel.o 00:03:28.320 CXX test/cpp_headers/nvme_ocssd.o 00:03:28.320 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:28.320 CXX test/cpp_headers/nvme_spec.o 00:03:28.320 CXX test/cpp_headers/nvmf_cmd.o 00:03:28.320 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:28.320 CXX test/cpp_headers/nvme_zns.o 00:03:28.320 CXX test/cpp_headers/nvmf_spec.o 00:03:28.320 CXX test/cpp_headers/nvmf.o 00:03:28.320 CXX test/cpp_headers/opal.o 00:03:28.320 CXX test/cpp_headers/nvmf_transport.o 00:03:28.320 CC examples/util/zipf/zipf.o 00:03:28.320 CXX test/cpp_headers/opal_spec.o 00:03:28.320 CC examples/ioat/verify/verify.o 00:03:28.320 CC 
test/thread/poller_perf/poller_perf.o 00:03:28.320 CXX test/cpp_headers/pci_ids.o 00:03:28.320 CC test/app/histogram_perf/histogram_perf.o 00:03:28.320 CC examples/ioat/perf/perf.o 00:03:28.320 CC test/app/stub/stub.o 00:03:28.320 CC test/env/pci/pci_ut.o 00:03:28.320 CC test/app/jsoncat/jsoncat.o 00:03:28.320 CC test/env/vtophys/vtophys.o 00:03:28.320 CC test/env/memory/memory_ut.o 00:03:28.320 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:28.320 CC app/fio/nvme/fio_plugin.o 00:03:28.320 CC test/app/bdev_svc/bdev_svc.o 00:03:28.320 CC app/fio/bdev/fio_plugin.o 00:03:28.320 CC test/dma/test_dma/test_dma.o 00:03:28.589 LINK spdk_lspci 00:03:28.589 LINK nvmf_tgt 00:03:28.589 LINK spdk_nvme_discover 00:03:28.853 LINK rpc_client_test 00:03:28.853 LINK spdk_trace_record 00:03:28.853 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:28.853 LINK spdk_tgt 00:03:28.853 LINK interrupt_tgt 00:03:28.853 CC test/env/mem_callbacks/mem_callbacks.o 00:03:28.853 LINK poller_perf 00:03:28.853 LINK jsoncat 00:03:28.853 LINK iscsi_tgt 00:03:28.853 CXX test/cpp_headers/pipe.o 00:03:28.853 CXX test/cpp_headers/queue.o 00:03:28.853 LINK vtophys 00:03:28.853 CXX test/cpp_headers/reduce.o 00:03:28.853 CXX test/cpp_headers/rpc.o 00:03:28.853 CXX test/cpp_headers/scheduler.o 00:03:28.853 CXX test/cpp_headers/scsi.o 00:03:28.853 LINK stub 00:03:28.853 CXX test/cpp_headers/scsi_spec.o 00:03:28.853 CXX test/cpp_headers/sock.o 00:03:28.853 CXX test/cpp_headers/stdinc.o 00:03:28.853 CXX test/cpp_headers/string.o 00:03:28.853 CXX test/cpp_headers/thread.o 00:03:28.853 CXX test/cpp_headers/trace.o 00:03:28.853 CXX test/cpp_headers/trace_parser.o 00:03:28.853 CXX test/cpp_headers/tree.o 00:03:28.853 LINK zipf 00:03:28.853 CXX test/cpp_headers/ublk.o 00:03:29.113 LINK histogram_perf 00:03:29.113 CXX test/cpp_headers/util.o 00:03:29.113 CXX test/cpp_headers/uuid.o 00:03:29.113 CXX test/cpp_headers/version.o 00:03:29.113 CXX test/cpp_headers/vfio_user_pci.o 00:03:29.113 CXX 
test/cpp_headers/vfio_user_spec.o 00:03:29.113 CXX test/cpp_headers/vhost.o 00:03:29.113 CXX test/cpp_headers/vmd.o 00:03:29.113 CXX test/cpp_headers/xor.o 00:03:29.113 CXX test/cpp_headers/zipf.o 00:03:29.113 LINK env_dpdk_post_init 00:03:29.113 LINK ioat_perf 00:03:29.113 LINK spdk_dd 00:03:29.113 LINK bdev_svc 00:03:29.113 LINK verify 00:03:29.113 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.113 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:29.113 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:29.113 LINK spdk_trace 00:03:29.113 LINK pci_ut 00:03:29.371 LINK nvme_fuzz 00:03:29.371 LINK spdk_nvme 00:03:29.371 LINK spdk_bdev 00:03:29.371 LINK test_dma 00:03:29.630 LINK spdk_nvme_perf 00:03:29.630 LINK spdk_nvme_identify 00:03:29.630 LINK spdk_top 00:03:29.630 LINK vhost_fuzz 00:03:29.630 CC test/event/event_perf/event_perf.o 00:03:29.630 CC test/event/reactor/reactor.o 00:03:29.630 CC test/event/reactor_perf/reactor_perf.o 00:03:29.630 CC test/event/app_repeat/app_repeat.o 00:03:29.630 CC test/event/scheduler/scheduler.o 00:03:29.630 CC examples/vmd/lsvmd/lsvmd.o 00:03:29.630 CC examples/sock/hello_world/hello_sock.o 00:03:29.630 LINK mem_callbacks 00:03:29.630 CC examples/idxd/perf/perf.o 00:03:29.630 CC examples/vmd/led/led.o 00:03:29.630 CC app/vhost/vhost.o 00:03:29.630 CC examples/thread/thread/thread_ex.o 00:03:29.630 LINK reactor 00:03:29.630 LINK reactor_perf 00:03:29.630 LINK event_perf 00:03:29.630 LINK lsvmd 00:03:29.889 LINK led 00:03:29.889 LINK app_repeat 00:03:29.889 LINK vhost 00:03:29.889 LINK scheduler 00:03:29.889 LINK hello_sock 00:03:29.889 LINK thread 00:03:29.889 LINK idxd_perf 00:03:29.889 CC test/nvme/err_injection/err_injection.o 00:03:29.889 CC test/nvme/simple_copy/simple_copy.o 00:03:29.889 CC test/nvme/aer/aer.o 00:03:29.889 CC test/nvme/reserve/reserve.o 00:03:29.889 CC test/nvme/startup/startup.o 00:03:29.889 CC test/nvme/reset/reset.o 00:03:29.889 CC test/nvme/cuse/cuse.o 00:03:29.889 CC test/nvme/e2edp/nvme_dp.o 
00:03:29.889 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:29.889 CC test/nvme/fdp/fdp.o 00:03:29.889 LINK memory_ut 00:03:29.889 CC test/nvme/compliance/nvme_compliance.o 00:03:29.889 CC test/nvme/fused_ordering/fused_ordering.o 00:03:29.889 CC test/nvme/overhead/overhead.o 00:03:29.889 CC test/nvme/connect_stress/connect_stress.o 00:03:29.889 CC test/nvme/boot_partition/boot_partition.o 00:03:29.889 CC test/nvme/sgl/sgl.o 00:03:29.889 CC test/blobfs/mkfs/mkfs.o 00:03:29.889 CC test/accel/dif/dif.o 00:03:30.148 CC test/lvol/esnap/esnap.o 00:03:30.148 LINK startup 00:03:30.148 LINK err_injection 00:03:30.148 LINK connect_stress 00:03:30.148 LINK doorbell_aers 00:03:30.148 LINK reserve 00:03:30.148 LINK boot_partition 00:03:30.148 LINK mkfs 00:03:30.148 LINK fused_ordering 00:03:30.148 LINK simple_copy 00:03:30.148 LINK reset 00:03:30.148 LINK nvme_dp 00:03:30.148 LINK aer 00:03:30.148 LINK sgl 00:03:30.148 LINK overhead 00:03:30.148 LINK fdp 00:03:30.148 LINK nvme_compliance 00:03:30.407 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:30.407 CC examples/nvme/reconnect/reconnect.o 00:03:30.407 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:30.407 CC examples/nvme/hotplug/hotplug.o 00:03:30.407 CC examples/nvme/hello_world/hello_world.o 00:03:30.407 CC examples/nvme/arbitration/arbitration.o 00:03:30.407 CC examples/nvme/abort/abort.o 00:03:30.407 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:30.407 CC examples/accel/perf/accel_perf.o 00:03:30.407 CC examples/blob/cli/blobcli.o 00:03:30.407 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:30.407 CC examples/blob/hello_world/hello_blob.o 00:03:30.407 LINK pmr_persistence 00:03:30.664 LINK cmb_copy 00:03:30.664 LINK iscsi_fuzz 00:03:30.664 LINK dif 00:03:30.664 LINK hello_world 00:03:30.664 LINK hotplug 00:03:30.664 LINK arbitration 00:03:30.664 LINK abort 00:03:30.664 LINK reconnect 00:03:30.664 LINK hello_blob 00:03:30.664 LINK hello_fsdev 00:03:30.664 LINK nvme_manage 00:03:30.922 LINK accel_perf 
00:03:30.922 LINK blobcli 00:03:30.922 LINK cuse 00:03:31.181 CC test/bdev/bdevio/bdevio.o 00:03:31.440 CC examples/bdev/hello_world/hello_bdev.o 00:03:31.440 CC examples/bdev/bdevperf/bdevperf.o 00:03:31.440 LINK bdevio 00:03:31.440 LINK hello_bdev 00:03:32.008 LINK bdevperf 00:03:32.576 CC examples/nvmf/nvmf/nvmf.o 00:03:32.834 LINK nvmf 00:03:33.772 LINK esnap 00:03:34.030 00:03:34.031 real 0m54.867s 00:03:34.031 user 6m49.709s 00:03:34.031 sys 2m48.897s 00:03:34.031 16:17:03 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:34.031 16:17:03 make -- common/autotest_common.sh@10 -- $ set +x 00:03:34.031 ************************************ 00:03:34.031 END TEST make 00:03:34.031 ************************************ 00:03:34.031 16:17:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:34.031 16:17:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:34.031 16:17:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:34.031 16:17:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.031 16:17:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:34.031 16:17:03 -- pm/common@44 -- $ pid=676680 00:03:34.031 16:17:03 -- pm/common@50 -- $ kill -TERM 676680 00:03:34.031 16:17:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.031 16:17:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:34.031 16:17:03 -- pm/common@44 -- $ pid=676681 00:03:34.031 16:17:03 -- pm/common@50 -- $ kill -TERM 676681 00:03:34.031 16:17:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.031 16:17:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:34.031 16:17:03 -- pm/common@44 -- $ pid=676684 00:03:34.031 16:17:03 -- pm/common@50 -- $ kill -TERM 676684 00:03:34.031 16:17:03 
-- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.031 16:17:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:34.031 16:17:03 -- pm/common@44 -- $ pid=676707 00:03:34.031 16:17:03 -- pm/common@50 -- $ sudo -E kill -TERM 676707 00:03:34.031 16:17:03 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:34.031 16:17:03 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:34.031 16:17:04 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:34.031 16:17:04 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:34.031 16:17:04 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:34.031 16:17:04 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:34.031 16:17:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.291 16:17:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.291 16:17:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.291 16:17:04 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.291 16:17:04 -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.291 16:17:04 -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.291 16:17:04 -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.291 16:17:04 -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.291 16:17:04 -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.291 16:17:04 -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.291 16:17:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.291 16:17:04 -- scripts/common.sh@344 -- # case "$op" in 00:03:34.291 16:17:04 -- scripts/common.sh@345 -- # : 1 00:03:34.291 16:17:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.291 16:17:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.291 16:17:04 -- scripts/common.sh@365 -- # decimal 1 00:03:34.291 16:17:04 -- scripts/common.sh@353 -- # local d=1 00:03:34.291 16:17:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.291 16:17:04 -- scripts/common.sh@355 -- # echo 1 00:03:34.291 16:17:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.291 16:17:04 -- scripts/common.sh@366 -- # decimal 2 00:03:34.291 16:17:04 -- scripts/common.sh@353 -- # local d=2 00:03:34.291 16:17:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.291 16:17:04 -- scripts/common.sh@355 -- # echo 2 00:03:34.291 16:17:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.291 16:17:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.291 16:17:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.291 16:17:04 -- scripts/common.sh@368 -- # return 0 00:03:34.291 16:17:04 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.291 16:17:04 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:34.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.291 --rc genhtml_branch_coverage=1 00:03:34.291 --rc genhtml_function_coverage=1 00:03:34.291 --rc genhtml_legend=1 00:03:34.291 --rc geninfo_all_blocks=1 00:03:34.291 --rc geninfo_unexecuted_blocks=1 00:03:34.291 00:03:34.291 ' 00:03:34.291 16:17:04 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:34.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.291 --rc genhtml_branch_coverage=1 00:03:34.291 --rc genhtml_function_coverage=1 00:03:34.291 --rc genhtml_legend=1 00:03:34.291 --rc geninfo_all_blocks=1 00:03:34.291 --rc geninfo_unexecuted_blocks=1 00:03:34.291 00:03:34.291 ' 00:03:34.291 16:17:04 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:34.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.291 --rc genhtml_branch_coverage=1 00:03:34.291 --rc 
genhtml_function_coverage=1 00:03:34.291 --rc genhtml_legend=1 00:03:34.291 --rc geninfo_all_blocks=1 00:03:34.291 --rc geninfo_unexecuted_blocks=1 00:03:34.291 00:03:34.291 ' 00:03:34.291 16:17:04 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:34.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.291 --rc genhtml_branch_coverage=1 00:03:34.291 --rc genhtml_function_coverage=1 00:03:34.291 --rc genhtml_legend=1 00:03:34.291 --rc geninfo_all_blocks=1 00:03:34.291 --rc geninfo_unexecuted_blocks=1 00:03:34.291 00:03:34.291 ' 00:03:34.291 16:17:04 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:34.291 16:17:04 -- nvmf/common.sh@7 -- # uname -s 00:03:34.291 16:17:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:34.291 16:17:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:34.291 16:17:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:34.291 16:17:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:34.291 16:17:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:34.291 16:17:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:34.291 16:17:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:34.291 16:17:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:34.291 16:17:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:34.291 16:17:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:34.291 16:17:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:34.291 16:17:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:34.291 16:17:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:34.291 16:17:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:34.291 16:17:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:34.291 16:17:04 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:34.291 16:17:04 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:34.291 16:17:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:34.291 16:17:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:34.291 16:17:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.291 16:17:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.291 16:17:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.291 16:17:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.291 16:17:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.291 16:17:04 -- paths/export.sh@5 -- # export PATH 00:03:34.291 16:17:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.291 16:17:04 -- nvmf/common.sh@51 -- # : 0 00:03:34.291 16:17:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:34.291 16:17:04 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:34.291 16:17:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:34.291 16:17:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:34.291 16:17:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:34.291 16:17:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:34.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:34.291 16:17:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:34.291 16:17:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:34.291 16:17:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:34.291 16:17:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:34.291 16:17:04 -- spdk/autotest.sh@32 -- # uname -s 00:03:34.291 16:17:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:34.291 16:17:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:34.291 16:17:04 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:34.291 16:17:04 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:34.291 16:17:04 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:34.291 16:17:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:34.291 16:17:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:34.291 16:17:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:34.291 16:17:04 -- spdk/autotest.sh@48 -- # udevadm_pid=756603 00:03:34.291 16:17:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:34.291 16:17:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:34.291 16:17:04 -- pm/common@17 -- # local monitor 00:03:34.291 16:17:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.291 16:17:04 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:34.291 16:17:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.291 16:17:04 -- pm/common@21 -- # date +%s 00:03:34.291 16:17:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.291 16:17:04 -- pm/common@21 -- # date +%s 00:03:34.291 16:17:04 -- pm/common@25 -- # sleep 1 00:03:34.291 16:17:04 -- pm/common@21 -- # date +%s 00:03:34.291 16:17:04 -- pm/common@21 -- # date +%s 00:03:34.291 16:17:04 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734189424 00:03:34.291 16:17:04 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734189424 00:03:34.291 16:17:04 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734189424 00:03:34.291 16:17:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734189424 00:03:34.291 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734189424_collect-vmstat.pm.log 00:03:34.291 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734189424_collect-cpu-load.pm.log 00:03:34.291 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734189424_collect-cpu-temp.pm.log 00:03:34.291 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734189424_collect-bmc-pm.bmc.pm.log 00:03:35.231 
16:17:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:35.231 16:17:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:35.231 16:17:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.231 16:17:05 -- common/autotest_common.sh@10 -- # set +x 00:03:35.231 16:17:05 -- spdk/autotest.sh@59 -- # create_test_list 00:03:35.231 16:17:05 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:35.231 16:17:05 -- common/autotest_common.sh@10 -- # set +x 00:03:35.231 16:17:05 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:35.231 16:17:05 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:35.231 16:17:05 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:35.231 16:17:05 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:35.231 16:17:05 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:35.231 16:17:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:35.231 16:17:05 -- common/autotest_common.sh@1457 -- # uname 00:03:35.231 16:17:05 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:35.231 16:17:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:35.231 16:17:05 -- common/autotest_common.sh@1477 -- # uname 00:03:35.231 16:17:05 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:35.231 16:17:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:35.231 16:17:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:35.490 lcov: LCOV version 1.15 00:03:35.490 16:17:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:53.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:00.147 16:17:29 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:00.147 16:17:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.147 16:17:29 -- common/autotest_common.sh@10 -- # set +x 00:04:00.147 16:17:29 -- spdk/autotest.sh@78 -- # rm -f 00:04:00.147 16:17:29 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.684 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:02.943 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:02.943 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:02.943 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:02.943 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:02.943 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:02.943 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:02.943 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:02.943 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:02.943 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:02.944 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:02.944 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:02.944 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:03.202 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:03.202 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:03.202 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:03.202 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:03.202 16:17:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:03.202 16:17:33 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:03.202 16:17:33 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:03.202 16:17:33 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:03.203 16:17:33 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:03.203 16:17:33 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:03.203 16:17:33 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:03.203 16:17:33 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:03.203 16:17:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:03.203 16:17:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:03.203 16:17:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:03.203 16:17:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.203 16:17:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:03.203 16:17:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:03.203 16:17:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.203 16:17:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.203 16:17:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:03.203 16:17:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:03.203 16:17:33 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.203 No valid GPT data, bailing 00:04:03.203 16:17:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.203 16:17:33 -- scripts/common.sh@394 -- # pt= 00:04:03.203 16:17:33 -- scripts/common.sh@395 -- 
# return 1 00:04:03.203 16:17:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.203 1+0 records in 00:04:03.203 1+0 records out 00:04:03.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480043 s, 218 MB/s 00:04:03.203 16:17:33 -- spdk/autotest.sh@105 -- # sync 00:04:03.203 16:17:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:03.203 16:17:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:03.203 16:17:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:09.773 16:17:38 -- spdk/autotest.sh@111 -- # uname -s 00:04:09.773 16:17:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:09.773 16:17:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:09.773 16:17:38 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:11.678 Hugepages 00:04:11.678 node hugesize free / total 00:04:11.678 node0 1048576kB 0 / 0 00:04:11.678 node0 2048kB 0 / 0 00:04:11.678 node1 1048576kB 0 / 0 00:04:11.678 node1 2048kB 0 / 0 00:04:11.678 00:04:11.678 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.678 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:11.678 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:11.678 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:11.678 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:11.678 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:11.678 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:11.678 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:11.678 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:11.678 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:11.678 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:11.678 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:11.678 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:11.678 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:11.678 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:11.678 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:04:11.679 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:11.679 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:11.679 16:17:41 -- spdk/autotest.sh@117 -- # uname -s 00:04:11.679 16:17:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:11.679 16:17:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:11.679 16:17:41 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.969 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.969 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:15.228 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.486 16:17:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:16.423 16:17:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:16.423 16:17:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:16.423 16:17:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.423 16:17:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:16.423 16:17:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:16.423 16:17:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:16.423 16:17:46 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.423 16:17:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:16.423 16:17:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:16.423 16:17:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:16.423 16:17:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:16.423 16:17:46 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.713 Waiting for block devices as requested 00:04:19.713 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:19.713 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.713 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.713 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.713 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.713 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:19.713 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.713 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.972 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.972 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.972 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:20.231 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:20.231 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:20.231 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:20.489 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:20.489 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.489 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:20.749 16:17:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:20.749 16:17:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:20.749 16:17:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:20.749 16:17:50 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:20.749 16:17:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:20.749 16:17:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:20.749 16:17:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:20.749 16:17:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:20.749 16:17:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:20.749 16:17:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:20.749 16:17:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:20.749 16:17:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:20.749 16:17:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:20.749 16:17:50 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:20.749 16:17:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:20.749 16:17:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:20.749 16:17:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:20.749 16:17:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:20.749 16:17:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:20.749 16:17:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:20.749 16:17:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:20.749 16:17:50 -- common/autotest_common.sh@1543 -- # continue 00:04:20.749 16:17:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:20.749 16:17:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.749 16:17:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.749 16:17:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:20.749 16:17:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.749 
16:17:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.749 16:17:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.039 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.039 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.298 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:24.557 16:17:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:24.557 16:17:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.557 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:04:24.557 16:17:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:24.557 16:17:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:24.557 16:17:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.557 16:17:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:24.557 16:17:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:24.557 16:17:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:24.557 16:17:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:24.557 16:17:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:24.557 16:17:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.557 16:17:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.557 16:17:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.557 16:17:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.557 16:17:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.557 16:17:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:24.557 16:17:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:24.557 16:17:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.557 16:17:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:24.557 16:17:54 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:24.817 16:17:54 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:24.817 16:17:54 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:24.817 16:17:54 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:24.817 16:17:54 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:24.817 16:17:54 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:24.817 16:17:54 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=770578 00:04:24.817 16:17:54 -- common/autotest_common.sh@1585 -- # waitforlisten 770578 00:04:24.817 16:17:54 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.817 16:17:54 -- common/autotest_common.sh@835 -- # '[' -z 770578 ']' 00:04:24.817 16:17:54 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.817 16:17:54 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.817 16:17:54 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.817 16:17:54 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.817 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:04:24.817 [2024-12-14 16:17:54.697183] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:24.817 [2024-12-14 16:17:54.697231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770578 ] 00:04:24.817 [2024-12-14 16:17:54.774260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.817 [2024-12-14 16:17:54.796886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.076 16:17:55 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.076 16:17:55 -- common/autotest_common.sh@868 -- # return 0 00:04:25.076 16:17:55 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:25.076 16:17:55 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:25.076 16:17:55 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:28.368 nvme0n1 00:04:28.368 16:17:58 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:28.368 [2024-12-14 16:17:58.184348] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:28.368 [2024-12-14 16:17:58.184378] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:28.368 request: 00:04:28.368 { 00:04:28.368 "nvme_ctrlr_name": "nvme0", 00:04:28.368 "password": "test", 00:04:28.368 "method": 
"bdev_nvme_opal_revert", 00:04:28.368 "req_id": 1 00:04:28.368 } 00:04:28.368 Got JSON-RPC error response 00:04:28.368 response: 00:04:28.368 { 00:04:28.368 "code": -32603, 00:04:28.368 "message": "Internal error" 00:04:28.368 } 00:04:28.368 16:17:58 -- common/autotest_common.sh@1591 -- # true 00:04:28.368 16:17:58 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:28.368 16:17:58 -- common/autotest_common.sh@1595 -- # killprocess 770578 00:04:28.368 16:17:58 -- common/autotest_common.sh@954 -- # '[' -z 770578 ']' 00:04:28.368 16:17:58 -- common/autotest_common.sh@958 -- # kill -0 770578 00:04:28.368 16:17:58 -- common/autotest_common.sh@959 -- # uname 00:04:28.368 16:17:58 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.368 16:17:58 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770578 00:04:28.368 16:17:58 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.368 16:17:58 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.368 16:17:58 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770578' 00:04:28.368 killing process with pid 770578 00:04:28.368 16:17:58 -- common/autotest_common.sh@973 -- # kill 770578 00:04:28.368 16:17:58 -- common/autotest_common.sh@978 -- # wait 770578 00:04:30.047 16:17:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:30.047 16:17:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:30.047 16:17:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:30.047 16:17:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:30.047 16:17:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:30.047 16:17:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.047 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:04:30.047 16:17:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:30.047 16:17:59 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.048 16:17:59 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.048 16:17:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.048 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:04:30.048 ************************************ 00:04:30.048 START TEST env 00:04:30.048 ************************************ 00:04:30.048 16:17:59 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.048 * Looking for test storage... 00:04:30.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:30.048 16:17:59 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.048 16:17:59 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.048 16:17:59 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.048 16:18:00 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.048 16:18:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.048 16:18:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.048 16:18:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.048 16:18:00 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.048 16:18:00 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.048 16:18:00 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.048 16:18:00 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.048 16:18:00 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.048 16:18:00 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.048 16:18:00 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.048 16:18:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.048 16:18:00 env -- scripts/common.sh@344 -- # case "$op" in 00:04:30.048 16:18:00 env -- scripts/common.sh@345 -- # : 1 00:04:30.048 16:18:00 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.048 16:18:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.048 16:18:00 env -- scripts/common.sh@365 -- # decimal 1 00:04:30.048 16:18:00 env -- scripts/common.sh@353 -- # local d=1 00:04:30.048 16:18:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.048 16:18:00 env -- scripts/common.sh@355 -- # echo 1 00:04:30.048 16:18:00 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.048 16:18:00 env -- scripts/common.sh@366 -- # decimal 2 00:04:30.048 16:18:00 env -- scripts/common.sh@353 -- # local d=2 00:04:30.048 16:18:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.048 16:18:00 env -- scripts/common.sh@355 -- # echo 2 00:04:30.048 16:18:00 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.048 16:18:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.048 16:18:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.048 16:18:00 env -- scripts/common.sh@368 -- # return 0 00:04:30.048 16:18:00 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.048 16:18:00 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.048 --rc genhtml_branch_coverage=1 00:04:30.048 --rc genhtml_function_coverage=1 00:04:30.048 --rc genhtml_legend=1 00:04:30.048 --rc geninfo_all_blocks=1 00:04:30.048 --rc geninfo_unexecuted_blocks=1 00:04:30.048 00:04:30.048 ' 00:04:30.048 16:18:00 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.048 --rc genhtml_branch_coverage=1 00:04:30.048 --rc genhtml_function_coverage=1 00:04:30.048 --rc genhtml_legend=1 00:04:30.048 --rc geninfo_all_blocks=1 00:04:30.048 --rc geninfo_unexecuted_blocks=1 00:04:30.048 00:04:30.048 ' 00:04:30.048 16:18:00 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:30.048 --rc genhtml_branch_coverage=1 00:04:30.048 --rc genhtml_function_coverage=1 00:04:30.048 --rc genhtml_legend=1 00:04:30.048 --rc geninfo_all_blocks=1 00:04:30.048 --rc geninfo_unexecuted_blocks=1 00:04:30.048 00:04:30.048 ' 00:04:30.048 16:18:00 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.048 --rc genhtml_branch_coverage=1 00:04:30.048 --rc genhtml_function_coverage=1 00:04:30.048 --rc genhtml_legend=1 00:04:30.048 --rc geninfo_all_blocks=1 00:04:30.048 --rc geninfo_unexecuted_blocks=1 00:04:30.048 00:04:30.048 ' 00:04:30.048 16:18:00 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.048 16:18:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.048 16:18:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.048 16:18:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.048 ************************************ 00:04:30.048 START TEST env_memory 00:04:30.048 ************************************ 00:04:30.048 16:18:00 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.048 00:04:30.048 00:04:30.048 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.048 http://cunit.sourceforge.net/ 00:04:30.048 00:04:30.048 00:04:30.048 Suite: memory 00:04:30.048 Test: alloc and free memory map ...[2024-12-14 16:18:00.120491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.048 passed 00:04:30.307 Test: mem map translation ...[2024-12-14 16:18:00.141091] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.307 [2024-12-14 
16:18:00.141108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.307 [2024-12-14 16:18:00.141146] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.307 [2024-12-14 16:18:00.141155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.307 passed 00:04:30.307 Test: mem map registration ...[2024-12-14 16:18:00.180850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:30.307 [2024-12-14 16:18:00.180867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:30.307 passed 00:04:30.307 Test: mem map adjacent registrations ...passed 00:04:30.307 00:04:30.307 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.307 suites 1 1 n/a 0 0 00:04:30.307 tests 4 4 4 0 0 00:04:30.307 asserts 152 152 152 0 n/a 00:04:30.307 00:04:30.307 Elapsed time = 0.141 seconds 00:04:30.307 00:04:30.307 real 0m0.154s 00:04:30.307 user 0m0.146s 00:04:30.307 sys 0m0.008s 00:04:30.307 16:18:00 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.307 16:18:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:30.307 ************************************ 00:04:30.307 END TEST env_memory 00:04:30.307 ************************************ 00:04:30.307 16:18:00 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.307 16:18:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:30.307 16:18:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.307 16:18:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.307 ************************************ 00:04:30.307 START TEST env_vtophys 00:04:30.307 ************************************ 00:04:30.307 16:18:00 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.307 EAL: lib.eal log level changed from notice to debug 00:04:30.307 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.307 EAL: Detected lcore 1 as core 1 on socket 0 00:04:30.307 EAL: Detected lcore 2 as core 2 on socket 0 00:04:30.307 EAL: Detected lcore 3 as core 3 on socket 0 00:04:30.307 EAL: Detected lcore 4 as core 4 on socket 0 00:04:30.307 EAL: Detected lcore 5 as core 5 on socket 0 00:04:30.307 EAL: Detected lcore 6 as core 6 on socket 0 00:04:30.307 EAL: Detected lcore 7 as core 8 on socket 0 00:04:30.307 EAL: Detected lcore 8 as core 9 on socket 0 00:04:30.307 EAL: Detected lcore 9 as core 10 on socket 0 00:04:30.307 EAL: Detected lcore 10 as core 11 on socket 0 00:04:30.307 EAL: Detected lcore 11 as core 12 on socket 0 00:04:30.307 EAL: Detected lcore 12 as core 13 on socket 0 00:04:30.307 EAL: Detected lcore 13 as core 16 on socket 0 00:04:30.307 EAL: Detected lcore 14 as core 17 on socket 0 00:04:30.307 EAL: Detected lcore 15 as core 18 on socket 0 00:04:30.307 EAL: Detected lcore 16 as core 19 on socket 0 00:04:30.307 EAL: Detected lcore 17 as core 20 on socket 0 00:04:30.307 EAL: Detected lcore 18 as core 21 on socket 0 00:04:30.307 EAL: Detected lcore 19 as core 25 on socket 0 00:04:30.307 EAL: Detected lcore 20 as core 26 on socket 0 00:04:30.307 EAL: Detected lcore 21 as core 27 on socket 0 00:04:30.307 EAL: Detected lcore 22 as core 28 on socket 0 00:04:30.307 EAL: Detected lcore 23 as core 29 on socket 0 00:04:30.307 EAL: Detected lcore 24 as core 0 on socket 1 00:04:30.307 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:30.307 EAL: Detected lcore 26 as core 2 on socket 1 00:04:30.307 EAL: Detected lcore 27 as core 3 on socket 1 00:04:30.307 EAL: Detected lcore 28 as core 4 on socket 1 00:04:30.307 EAL: Detected lcore 29 as core 5 on socket 1 00:04:30.307 EAL: Detected lcore 30 as core 6 on socket 1 00:04:30.307 EAL: Detected lcore 31 as core 8 on socket 1 00:04:30.307 EAL: Detected lcore 32 as core 9 on socket 1 00:04:30.307 EAL: Detected lcore 33 as core 10 on socket 1 00:04:30.307 EAL: Detected lcore 34 as core 11 on socket 1 00:04:30.307 EAL: Detected lcore 35 as core 12 on socket 1 00:04:30.307 EAL: Detected lcore 36 as core 13 on socket 1 00:04:30.307 EAL: Detected lcore 37 as core 16 on socket 1 00:04:30.307 EAL: Detected lcore 38 as core 17 on socket 1 00:04:30.307 EAL: Detected lcore 39 as core 18 on socket 1 00:04:30.307 EAL: Detected lcore 40 as core 19 on socket 1 00:04:30.307 EAL: Detected lcore 41 as core 20 on socket 1 00:04:30.307 EAL: Detected lcore 42 as core 21 on socket 1 00:04:30.307 EAL: Detected lcore 43 as core 25 on socket 1 00:04:30.307 EAL: Detected lcore 44 as core 26 on socket 1 00:04:30.307 EAL: Detected lcore 45 as core 27 on socket 1 00:04:30.307 EAL: Detected lcore 46 as core 28 on socket 1 00:04:30.307 EAL: Detected lcore 47 as core 29 on socket 1 00:04:30.307 EAL: Detected lcore 48 as core 0 on socket 0 00:04:30.307 EAL: Detected lcore 49 as core 1 on socket 0 00:04:30.307 EAL: Detected lcore 50 as core 2 on socket 0 00:04:30.307 EAL: Detected lcore 51 as core 3 on socket 0 00:04:30.307 EAL: Detected lcore 52 as core 4 on socket 0 00:04:30.307 EAL: Detected lcore 53 as core 5 on socket 0 00:04:30.307 EAL: Detected lcore 54 as core 6 on socket 0 00:04:30.307 EAL: Detected lcore 55 as core 8 on socket 0 00:04:30.307 EAL: Detected lcore 56 as core 9 on socket 0 00:04:30.307 EAL: Detected lcore 57 as core 10 on socket 0 00:04:30.307 EAL: Detected lcore 58 as core 11 on socket 0 00:04:30.307 EAL: Detected lcore 59 as core 12 
on socket 0 00:04:30.307 EAL: Detected lcore 60 as core 13 on socket 0 00:04:30.307 EAL: Detected lcore 61 as core 16 on socket 0 00:04:30.307 EAL: Detected lcore 62 as core 17 on socket 0 00:04:30.307 EAL: Detected lcore 63 as core 18 on socket 0 00:04:30.307 EAL: Detected lcore 64 as core 19 on socket 0 00:04:30.307 EAL: Detected lcore 65 as core 20 on socket 0 00:04:30.307 EAL: Detected lcore 66 as core 21 on socket 0 00:04:30.307 EAL: Detected lcore 67 as core 25 on socket 0 00:04:30.307 EAL: Detected lcore 68 as core 26 on socket 0 00:04:30.307 EAL: Detected lcore 69 as core 27 on socket 0 00:04:30.307 EAL: Detected lcore 70 as core 28 on socket 0 00:04:30.307 EAL: Detected lcore 71 as core 29 on socket 0 00:04:30.307 EAL: Detected lcore 72 as core 0 on socket 1 00:04:30.307 EAL: Detected lcore 73 as core 1 on socket 1 00:04:30.307 EAL: Detected lcore 74 as core 2 on socket 1 00:04:30.307 EAL: Detected lcore 75 as core 3 on socket 1 00:04:30.307 EAL: Detected lcore 76 as core 4 on socket 1 00:04:30.307 EAL: Detected lcore 77 as core 5 on socket 1 00:04:30.307 EAL: Detected lcore 78 as core 6 on socket 1 00:04:30.307 EAL: Detected lcore 79 as core 8 on socket 1 00:04:30.307 EAL: Detected lcore 80 as core 9 on socket 1 00:04:30.307 EAL: Detected lcore 81 as core 10 on socket 1 00:04:30.307 EAL: Detected lcore 82 as core 11 on socket 1 00:04:30.307 EAL: Detected lcore 83 as core 12 on socket 1 00:04:30.307 EAL: Detected lcore 84 as core 13 on socket 1 00:04:30.307 EAL: Detected lcore 85 as core 16 on socket 1 00:04:30.307 EAL: Detected lcore 86 as core 17 on socket 1 00:04:30.307 EAL: Detected lcore 87 as core 18 on socket 1 00:04:30.307 EAL: Detected lcore 88 as core 19 on socket 1 00:04:30.307 EAL: Detected lcore 89 as core 20 on socket 1 00:04:30.307 EAL: Detected lcore 90 as core 21 on socket 1 00:04:30.307 EAL: Detected lcore 91 as core 25 on socket 1 00:04:30.307 EAL: Detected lcore 92 as core 26 on socket 1 00:04:30.307 EAL: Detected lcore 93 as core 27 on 
socket 1 00:04:30.307 EAL: Detected lcore 94 as core 28 on socket 1 00:04:30.307 EAL: Detected lcore 95 as core 29 on socket 1 00:04:30.307 EAL: Maximum logical cores by configuration: 128 00:04:30.307 EAL: Detected CPU lcores: 96 00:04:30.307 EAL: Detected NUMA nodes: 2 00:04:30.307 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:30.307 EAL: Detected shared linkage of DPDK 00:04:30.307 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:30.307 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:30.307 EAL: Registered [vdev] bus. 00:04:30.307 EAL: bus.vdev log level changed from disabled to notice 00:04:30.307 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:30.307 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:30.307 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:30.307 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:30.307 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:30.307 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:30.307 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:30.307 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:30.307 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.307 EAL: No shared files mode enabled, IPC is disabled 00:04:30.307 EAL: Bus pci wants IOVA as 'DC' 00:04:30.307 EAL: Bus vdev wants IOVA as 'DC' 00:04:30.307 EAL: Buses did not request a specific IOVA mode. 
00:04:30.307 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:30.307 EAL: Selected IOVA mode 'VA' 00:04:30.307 EAL: Probing VFIO support... 00:04:30.307 EAL: IOMMU type 1 (Type 1) is supported 00:04:30.307 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:30.307 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:30.307 EAL: VFIO support initialized 00:04:30.307 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.307 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.307 EAL: Setting up physically contiguous memory... 00:04:30.307 EAL: Setting maximum number of open files to 524288 00:04:30.307 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.307 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:30.307 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.307 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.307 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.307 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.307 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.307 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.307 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.307 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.307 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.307 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.307 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.307 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.307 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.307 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.307 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.307 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.307 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.307 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:04:30.307 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.307 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.307 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.307 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.307 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.307 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.307 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.307 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:30.307 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.307 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:30.307 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.307 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.307 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:30.307 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:30.307 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.307 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:30.307 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.307 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.307 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:30.307 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:30.307 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.308 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:30.308 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.308 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.308 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:30.308 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:30.308 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.308 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:30.308 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:04:30.308 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.308 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:30.308 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:30.308 EAL: Hugepages will be freed exactly as allocated. 00:04:30.308 EAL: No shared files mode enabled, IPC is disabled 00:04:30.308 EAL: No shared files mode enabled, IPC is disabled 00:04:30.308 EAL: TSC frequency is ~2100000 KHz 00:04:30.308 EAL: Main lcore 0 is ready (tid=7f2192602a00;cpuset=[0]) 00:04:30.308 EAL: Trying to obtain current memory policy. 00:04:30.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.308 EAL: Restoring previous memory policy: 0 00:04:30.308 EAL: request: mp_malloc_sync 00:04:30.308 EAL: No shared files mode enabled, IPC is disabled 00:04:30.308 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.308 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:30.308 EAL: probe driver: 8086:37d2 net_i40e 00:04:30.308 EAL: Not managed by a supported kernel driver, skipped 00:04:30.308 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:30.308 EAL: probe driver: 8086:37d2 net_i40e 00:04:30.308 EAL: Not managed by a supported kernel driver, skipped 00:04:30.308 EAL: No shared files mode enabled, IPC is disabled 00:04:30.308 EAL: No shared files mode enabled, IPC is disabled 00:04:30.308 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.308 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.566 00:04:30.566 00:04:30.566 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.566 http://cunit.sourceforge.net/ 00:04:30.566 00:04:30.566 00:04:30.566 Suite: components_suite 00:04:30.566 Test: vtophys_malloc_test ...passed 00:04:30.566 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:30.566 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.566 EAL: Restoring previous memory policy: 4 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.567 EAL: Trying to obtain current memory policy. 00:04:30.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.567 EAL: Restoring previous memory policy: 4 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.567 EAL: Trying to obtain current memory policy. 00:04:30.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.567 EAL: Restoring previous memory policy: 4 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.567 EAL: Trying to obtain current memory policy. 
00:04:30.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.567 EAL: Restoring previous memory policy: 4 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.567 EAL: Trying to obtain current memory policy. 00:04:30.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.567 EAL: Restoring previous memory policy: 4 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.567 EAL: Trying to obtain current memory policy. 00:04:30.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.567 EAL: Restoring previous memory policy: 4 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.567 EAL: Trying to obtain current memory policy. 
00:04:30.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.567 EAL: Restoring previous memory policy: 4 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.567 EAL: Trying to obtain current memory policy. 00:04:30.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.567 EAL: Restoring previous memory policy: 4 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.567 EAL: request: mp_malloc_sync 00:04:30.567 EAL: No shared files mode enabled, IPC is disabled 00:04:30.567 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.567 EAL: Trying to obtain current memory policy. 00:04:30.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.826 EAL: Restoring previous memory policy: 4 00:04:30.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.826 EAL: request: mp_malloc_sync 00:04:30.826 EAL: No shared files mode enabled, IPC is disabled 00:04:30.826 EAL: Heap on socket 0 was expanded by 514MB 00:04:30.826 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.826 EAL: request: mp_malloc_sync 00:04:30.826 EAL: No shared files mode enabled, IPC is disabled 00:04:30.826 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.826 EAL: Trying to obtain current memory policy. 
00:04:30.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.085 EAL: Restoring previous memory policy: 4 00:04:31.085 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.085 EAL: request: mp_malloc_sync 00:04:31.085 EAL: No shared files mode enabled, IPC is disabled 00:04:31.085 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.344 EAL: request: mp_malloc_sync 00:04:31.344 EAL: No shared files mode enabled, IPC is disabled 00:04:31.344 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.344 passed 00:04:31.344 00:04:31.344 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.344 suites 1 1 n/a 0 0 00:04:31.344 tests 2 2 2 0 0 00:04:31.344 asserts 497 497 497 0 n/a 00:04:31.344 00:04:31.344 Elapsed time = 0.962 seconds 00:04:31.344 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.344 EAL: request: mp_malloc_sync 00:04:31.344 EAL: No shared files mode enabled, IPC is disabled 00:04:31.344 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.344 EAL: No shared files mode enabled, IPC is disabled 00:04:31.344 EAL: No shared files mode enabled, IPC is disabled 00:04:31.344 EAL: No shared files mode enabled, IPC is disabled 00:04:31.344 00:04:31.344 real 0m1.089s 00:04:31.344 user 0m0.634s 00:04:31.344 sys 0m0.430s 00:04:31.344 16:18:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.344 16:18:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.344 ************************************ 00:04:31.344 END TEST env_vtophys 00:04:31.344 ************************************ 00:04:31.344 16:18:01 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.344 16:18:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.344 16:18:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.344 16:18:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.602 
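The env_vtophys run above repeatedly expands and shrinks the socket-0 heap (18MB, 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB), and a healthy run pairs every expansion with a shrink of the same size. That pairing can be checked mechanically from the EAL lines; a minimal Python sketch (a hypothetical log-checking helper, not part of the SPDK test suite):

```python
import re

# Sample EAL lines as they appear in the log above, timestamps stripped.
log = """\
EAL: Heap on socket 0 was expanded by 18MB
EAL: Heap on socket 0 was shrunk by 18MB
EAL: Heap on socket 0 was expanded by 34MB
EAL: Heap on socket 0 was shrunk by 34MB
EAL: Heap on socket 0 was expanded by 66MB
EAL: Heap on socket 0 was shrunk by 66MB
"""

def unmatched_heap_ops(lines):
    """Return sizes (in MB) that were expanded but never shrunk back."""
    pat = re.compile(r"Heap on socket (\d+) was (expanded|shrunk) by (\d+)MB")
    outstanding = []  # sizes currently expanded, in log order
    for line in lines:
        m = pat.search(line)
        if not m:
            continue
        size = int(m.group(3))
        if m.group(2) == "expanded":
            outstanding.append(size)
        elif size in outstanding:
            outstanding.remove(size)
    return outstanding

print(unmatched_heap_ops(log.splitlines()))  # empty list when every expand is paired
```

An empty result mirrors what the passing test above shows: no heap growth is left outstanding when env_vtophys finishes.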
************************************ 00:04:31.602 START TEST env_pci 00:04:31.602 ************************************ 00:04:31.602 16:18:01 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.602 00:04:31.602 00:04:31.602 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.602 http://cunit.sourceforge.net/ 00:04:31.602 00:04:31.602 00:04:31.602 Suite: pci 00:04:31.602 Test: pci_hook ...[2024-12-14 16:18:01.473547] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 771931 has claimed it 00:04:31.602 EAL: Cannot find device (10000:00:01.0) 00:04:31.602 EAL: Failed to attach device on primary process 00:04:31.602 passed 00:04:31.602 00:04:31.602 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.602 suites 1 1 n/a 0 0 00:04:31.602 tests 1 1 1 0 0 00:04:31.602 asserts 25 25 25 0 n/a 00:04:31.602 00:04:31.602 Elapsed time = 0.029 seconds 00:04:31.602 00:04:31.602 real 0m0.049s 00:04:31.602 user 0m0.015s 00:04:31.602 sys 0m0.034s 00:04:31.602 16:18:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.602 16:18:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.602 ************************************ 00:04:31.602 END TEST env_pci 00:04:31.602 ************************************ 00:04:31.602 16:18:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.602 16:18:01 env -- env/env.sh@15 -- # uname 00:04:31.602 16:18:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.602 16:18:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.602 16:18:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.602 16:18:01 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:31.602 16:18:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.602 16:18:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.602 ************************************ 00:04:31.602 START TEST env_dpdk_post_init 00:04:31.602 ************************************ 00:04:31.602 16:18:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.602 EAL: Detected CPU lcores: 96 00:04:31.602 EAL: Detected NUMA nodes: 2 00:04:31.602 EAL: Detected shared linkage of DPDK 00:04:31.602 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.602 EAL: Selected IOVA mode 'VA' 00:04:31.602 EAL: VFIO support initialized 00:04:31.602 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.861 EAL: Using IOMMU type 1 (Type 1) 00:04:31.861 EAL: Ignore mapping IO port bar(1) 00:04:31.861 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:31.861 EAL: Ignore mapping IO port bar(1) 00:04:31.861 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:31.861 EAL: Ignore mapping IO port bar(1) 00:04:31.861 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:31.861 EAL: Ignore mapping IO port bar(1) 00:04:31.861 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:31.861 EAL: Ignore mapping IO port bar(1) 00:04:31.861 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:31.861 EAL: Ignore mapping IO port bar(1) 00:04:31.861 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:31.861 EAL: Ignore mapping IO port bar(1) 00:04:31.861 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:31.861 EAL: Ignore mapping IO port bar(1) 00:04:31.861 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:32.798 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:32.798 EAL: Ignore mapping IO port bar(1) 00:04:32.798 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:32.798 EAL: Ignore mapping IO port bar(1) 00:04:32.798 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:32.798 EAL: Ignore mapping IO port bar(1) 00:04:32.798 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:32.798 EAL: Ignore mapping IO port bar(1) 00:04:32.798 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:32.798 EAL: Ignore mapping IO port bar(1) 00:04:32.798 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:32.798 EAL: Ignore mapping IO port bar(1) 00:04:32.798 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:32.798 EAL: Ignore mapping IO port bar(1) 00:04:32.798 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:32.798 EAL: Ignore mapping IO port bar(1) 00:04:32.798 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:36.084 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:36.084 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:36.084 Starting DPDK initialization... 00:04:36.084 Starting SPDK post initialization... 00:04:36.084 SPDK NVMe probe 00:04:36.084 Attaching to 0000:5e:00.0 00:04:36.084 Attached to 0000:5e:00.0 00:04:36.084 Cleaning up... 
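The env_dpdk_post_init stage above probes eight spdk_ioat channels on each NUMA socket plus one spdk_nvme device at 0000:5e:00.0. Summarizing such probe lines by driver and socket can be sketched as follows (a hypothetical log-parsing helper, not SPDK code; the sample lines are copied from the output above):

```python
import re
from collections import Counter

probe_lines = [
    "EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)",
    "EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)",
    "EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)",
]

def count_probes(lines):
    """Count probed PCI devices per (driver, socket) pair."""
    pat = re.compile(
        r"Probe PCI driver: (\S+) \([0-9a-f:]+\) device: \S+ \(socket (\d+)\)"
    )
    counts = Counter()
    for line in lines:
        m = pat.search(line)
        if m:
            counts[(m.group(1), int(m.group(2)))] += 1
    return counts

print(count_probes(probe_lines))
```

Run against the full log above, such a summary would report eight spdk_ioat probes per socket and one spdk_nvme probe on socket 0.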
00:04:36.084 00:04:36.084 real 0m4.371s 00:04:36.084 user 0m3.270s 00:04:36.084 sys 0m0.171s 00:04:36.084 16:18:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.084 16:18:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.084 ************************************ 00:04:36.084 END TEST env_dpdk_post_init 00:04:36.084 ************************************ 00:04:36.084 16:18:05 env -- env/env.sh@26 -- # uname 00:04:36.084 16:18:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.084 16:18:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.084 16:18:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.084 16:18:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.084 16:18:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.084 ************************************ 00:04:36.084 START TEST env_mem_callbacks 00:04:36.084 ************************************ 00:04:36.084 16:18:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.084 EAL: Detected CPU lcores: 96 00:04:36.084 EAL: Detected NUMA nodes: 2 00:04:36.084 EAL: Detected shared linkage of DPDK 00:04:36.084 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.084 EAL: Selected IOVA mode 'VA' 00:04:36.084 EAL: VFIO support initialized 00:04:36.084 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.084 00:04:36.084 00:04:36.084 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.084 http://cunit.sourceforge.net/ 00:04:36.084 00:04:36.084 00:04:36.084 Suite: memory 00:04:36.084 Test: test ... 
00:04:36.084 register 0x200000200000 2097152 00:04:36.084 malloc 3145728 00:04:36.084 register 0x200000400000 4194304 00:04:36.084 buf 0x200000500000 len 3145728 PASSED 00:04:36.084 malloc 64 00:04:36.084 buf 0x2000004fff40 len 64 PASSED 00:04:36.084 malloc 4194304 00:04:36.084 register 0x200000800000 6291456 00:04:36.084 buf 0x200000a00000 len 4194304 PASSED 00:04:36.084 free 0x200000500000 3145728 00:04:36.084 free 0x2000004fff40 64 00:04:36.084 unregister 0x200000400000 4194304 PASSED 00:04:36.084 free 0x200000a00000 4194304 00:04:36.084 unregister 0x200000800000 6291456 PASSED 00:04:36.084 malloc 8388608 00:04:36.084 register 0x200000400000 10485760 00:04:36.084 buf 0x200000600000 len 8388608 PASSED 00:04:36.084 free 0x200000600000 8388608 00:04:36.084 unregister 0x200000400000 10485760 PASSED 00:04:36.084 passed 00:04:36.084 00:04:36.084 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.084 suites 1 1 n/a 0 0 00:04:36.084 tests 1 1 1 0 0 00:04:36.084 asserts 15 15 15 0 n/a 00:04:36.084 00:04:36.084 Elapsed time = 0.008 seconds 00:04:36.084 00:04:36.084 real 0m0.059s 00:04:36.084 user 0m0.020s 00:04:36.084 sys 0m0.039s 00:04:36.084 16:18:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.084 16:18:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:36.084 ************************************ 00:04:36.084 END TEST env_mem_callbacks 00:04:36.084 ************************************ 00:04:36.084 00:04:36.084 real 0m6.250s 00:04:36.084 user 0m4.331s 00:04:36.084 sys 0m0.999s 00:04:36.084 16:18:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.084 16:18:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.084 ************************************ 00:04:36.084 END TEST env 00:04:36.084 ************************************ 00:04:36.084 16:18:06 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.084 16:18:06 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.084 16:18:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.084 16:18:06 -- common/autotest_common.sh@10 -- # set +x 00:04:36.343 ************************************ 00:04:36.343 START TEST rpc 00:04:36.343 ************************************ 00:04:36.343 16:18:06 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.343 * Looking for test storage... 00:04:36.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.343 16:18:06 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.343 16:18:06 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.343 16:18:06 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.343 16:18:06 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.343 16:18:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.343 16:18:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.343 16:18:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.343 16:18:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.343 16:18:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.343 16:18:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.343 16:18:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.343 16:18:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.343 16:18:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.343 16:18:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.343 16:18:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.343 16:18:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.343 16:18:06 rpc -- scripts/common.sh@345 -- # : 1 00:04:36.343 16:18:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.343 16:18:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.343 16:18:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.343 16:18:06 rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.343 16:18:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.343 16:18:06 rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.343 16:18:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.343 16:18:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.343 16:18:06 rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.344 16:18:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.344 16:18:06 rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.344 16:18:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.344 16:18:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.344 16:18:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.344 16:18:06 rpc -- scripts/common.sh@368 -- # return 0 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.344 --rc genhtml_branch_coverage=1 00:04:36.344 --rc genhtml_function_coverage=1 00:04:36.344 --rc genhtml_legend=1 00:04:36.344 --rc geninfo_all_blocks=1 00:04:36.344 --rc geninfo_unexecuted_blocks=1 00:04:36.344 00:04:36.344 ' 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.344 --rc genhtml_branch_coverage=1 00:04:36.344 --rc genhtml_function_coverage=1 00:04:36.344 --rc genhtml_legend=1 00:04:36.344 --rc geninfo_all_blocks=1 00:04:36.344 --rc geninfo_unexecuted_blocks=1 00:04:36.344 00:04:36.344 ' 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:36.344 --rc genhtml_branch_coverage=1 00:04:36.344 --rc genhtml_function_coverage=1 00:04:36.344 --rc genhtml_legend=1 00:04:36.344 --rc geninfo_all_blocks=1 00:04:36.344 --rc geninfo_unexecuted_blocks=1 00:04:36.344 00:04:36.344 ' 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.344 --rc genhtml_branch_coverage=1 00:04:36.344 --rc genhtml_function_coverage=1 00:04:36.344 --rc genhtml_legend=1 00:04:36.344 --rc geninfo_all_blocks=1 00:04:36.344 --rc geninfo_unexecuted_blocks=1 00:04:36.344 00:04:36.344 ' 00:04:36.344 16:18:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=772974 00:04:36.344 16:18:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.344 16:18:06 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:36.344 16:18:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 772974 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 772974 ']' 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.344 16:18:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.344 [2024-12-14 16:18:06.425408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:36.344 [2024-12-14 16:18:06.425456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772974 ] 00:04:36.602 [2024-12-14 16:18:06.497699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.602 [2024-12-14 16:18:06.519177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.602 [2024-12-14 16:18:06.519215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 772974' to capture a snapshot of events at runtime. 00:04:36.602 [2024-12-14 16:18:06.519222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.602 [2024-12-14 16:18:06.519227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.602 [2024-12-14 16:18:06.519232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid772974 for offline analysis/debug. 
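The waitforlisten step above blocks until spdk_tgt answers on the UNIX domain socket /var/tmp/spdk.sock before any rpc_cmd is issued. The same idea in a minimal Python sketch (a standalone illustration of the polling pattern, not the actual autotest_common.sh helper; the demo listener stands in for a real spdk_tgt):

```python
import os
import socket
import tempfile
import time

def wait_for_unix_socket(path, timeout=5.0, interval=0.1):
    """Poll until a server accepts connections on the UNIX socket at `path`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False

# Demo against a throwaway listener instead of a real spdk_tgt.
sock_path = os.path.join(tempfile.mkdtemp(), "demo.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)
print(wait_for_unix_socket(sock_path))  # True once the listener is up
server.close()
```

The bounded retry loop is what lets the test harness tolerate the variable spdk_tgt startup time seen in the timestamps above instead of failing on the first refused connection.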
00:04:36.602 [2024-12-14 16:18:06.519751] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.862 16:18:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.862 16:18:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:36.862 16:18:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.862 16:18:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.862 16:18:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:36.862 16:18:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:36.862 16:18:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.862 16:18:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.862 16:18:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.862 ************************************ 00:04:36.862 START TEST rpc_integrity 00:04:36.862 ************************************ 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.862 16:18:06 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.862 { 00:04:36.862 "name": "Malloc0", 00:04:36.862 "aliases": [ 00:04:36.862 "e84f955a-8c20-4593-b116-3c410ea4f5d0" 00:04:36.862 ], 00:04:36.862 "product_name": "Malloc disk", 00:04:36.862 "block_size": 512, 00:04:36.862 "num_blocks": 16384, 00:04:36.862 "uuid": "e84f955a-8c20-4593-b116-3c410ea4f5d0", 00:04:36.862 "assigned_rate_limits": { 00:04:36.862 "rw_ios_per_sec": 0, 00:04:36.862 "rw_mbytes_per_sec": 0, 00:04:36.862 "r_mbytes_per_sec": 0, 00:04:36.862 "w_mbytes_per_sec": 0 00:04:36.862 }, 00:04:36.862 "claimed": false, 00:04:36.862 "zoned": false, 00:04:36.862 "supported_io_types": { 00:04:36.862 "read": true, 00:04:36.862 "write": true, 00:04:36.862 "unmap": true, 00:04:36.862 "flush": true, 00:04:36.862 "reset": true, 00:04:36.862 "nvme_admin": false, 00:04:36.862 "nvme_io": false, 00:04:36.862 "nvme_io_md": false, 00:04:36.862 "write_zeroes": true, 00:04:36.862 "zcopy": true, 00:04:36.862 "get_zone_info": false, 00:04:36.862 
"zone_management": false, 00:04:36.862 "zone_append": false, 00:04:36.862 "compare": false, 00:04:36.862 "compare_and_write": false, 00:04:36.862 "abort": true, 00:04:36.862 "seek_hole": false, 00:04:36.862 "seek_data": false, 00:04:36.862 "copy": true, 00:04:36.862 "nvme_iov_md": false 00:04:36.862 }, 00:04:36.862 "memory_domains": [ 00:04:36.862 { 00:04:36.862 "dma_device_id": "system", 00:04:36.862 "dma_device_type": 1 00:04:36.862 }, 00:04:36.862 { 00:04:36.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.862 "dma_device_type": 2 00:04:36.862 } 00:04:36.862 ], 00:04:36.862 "driver_specific": {} 00:04:36.862 } 00:04:36.862 ]' 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.862 [2024-12-14 16:18:06.892131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:36.862 [2024-12-14 16:18:06.892164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.862 [2024-12-14 16:18:06.892175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa7eae0 00:04:36.862 [2024-12-14 16:18:06.892183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.862 [2024-12-14 16:18:06.893282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.862 [2024-12-14 16:18:06.893302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.862 Passthru0 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.862 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.862 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.862 { 00:04:36.862 "name": "Malloc0", 00:04:36.862 "aliases": [ 00:04:36.862 "e84f955a-8c20-4593-b116-3c410ea4f5d0" 00:04:36.862 ], 00:04:36.862 "product_name": "Malloc disk", 00:04:36.862 "block_size": 512, 00:04:36.862 "num_blocks": 16384, 00:04:36.862 "uuid": "e84f955a-8c20-4593-b116-3c410ea4f5d0", 00:04:36.862 "assigned_rate_limits": { 00:04:36.862 "rw_ios_per_sec": 0, 00:04:36.862 "rw_mbytes_per_sec": 0, 00:04:36.862 "r_mbytes_per_sec": 0, 00:04:36.862 "w_mbytes_per_sec": 0 00:04:36.862 }, 00:04:36.862 "claimed": true, 00:04:36.862 "claim_type": "exclusive_write", 00:04:36.863 "zoned": false, 00:04:36.863 "supported_io_types": { 00:04:36.863 "read": true, 00:04:36.863 "write": true, 00:04:36.863 "unmap": true, 00:04:36.863 "flush": true, 00:04:36.863 "reset": true, 00:04:36.863 "nvme_admin": false, 00:04:36.863 "nvme_io": false, 00:04:36.863 "nvme_io_md": false, 00:04:36.863 "write_zeroes": true, 00:04:36.863 "zcopy": true, 00:04:36.863 "get_zone_info": false, 00:04:36.863 "zone_management": false, 00:04:36.863 "zone_append": false, 00:04:36.863 "compare": false, 00:04:36.863 "compare_and_write": false, 00:04:36.863 "abort": true, 00:04:36.863 "seek_hole": false, 00:04:36.863 "seek_data": false, 00:04:36.863 "copy": true, 00:04:36.863 "nvme_iov_md": false 00:04:36.863 }, 00:04:36.863 "memory_domains": [ 00:04:36.863 { 00:04:36.863 "dma_device_id": "system", 00:04:36.863 "dma_device_type": 1 00:04:36.863 }, 00:04:36.863 { 00:04:36.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.863 "dma_device_type": 2 00:04:36.863 } 00:04:36.863 ], 00:04:36.863 "driver_specific": {} 00:04:36.863 }, 00:04:36.863 { 
00:04:36.863 "name": "Passthru0", 00:04:36.863 "aliases": [ 00:04:36.863 "1b775d07-8f0b-5d33-ad2b-454c08fb1a2d" 00:04:36.863 ], 00:04:36.863 "product_name": "passthru", 00:04:36.863 "block_size": 512, 00:04:36.863 "num_blocks": 16384, 00:04:36.863 "uuid": "1b775d07-8f0b-5d33-ad2b-454c08fb1a2d", 00:04:36.863 "assigned_rate_limits": { 00:04:36.863 "rw_ios_per_sec": 0, 00:04:36.863 "rw_mbytes_per_sec": 0, 00:04:36.863 "r_mbytes_per_sec": 0, 00:04:36.863 "w_mbytes_per_sec": 0 00:04:36.863 }, 00:04:36.863 "claimed": false, 00:04:36.863 "zoned": false, 00:04:36.863 "supported_io_types": { 00:04:36.863 "read": true, 00:04:36.863 "write": true, 00:04:36.863 "unmap": true, 00:04:36.863 "flush": true, 00:04:36.863 "reset": true, 00:04:36.863 "nvme_admin": false, 00:04:36.863 "nvme_io": false, 00:04:36.863 "nvme_io_md": false, 00:04:36.863 "write_zeroes": true, 00:04:36.863 "zcopy": true, 00:04:36.863 "get_zone_info": false, 00:04:36.863 "zone_management": false, 00:04:36.863 "zone_append": false, 00:04:36.863 "compare": false, 00:04:36.863 "compare_and_write": false, 00:04:36.863 "abort": true, 00:04:36.863 "seek_hole": false, 00:04:36.863 "seek_data": false, 00:04:36.863 "copy": true, 00:04:36.863 "nvme_iov_md": false 00:04:36.863 }, 00:04:36.863 "memory_domains": [ 00:04:36.863 { 00:04:36.863 "dma_device_id": "system", 00:04:36.863 "dma_device_type": 1 00:04:36.863 }, 00:04:36.863 { 00:04:36.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.863 "dma_device_type": 2 00:04:36.863 } 00:04:36.863 ], 00:04:36.863 "driver_specific": { 00:04:36.863 "passthru": { 00:04:36.863 "name": "Passthru0", 00:04:36.863 "base_bdev_name": "Malloc0" 00:04:36.863 } 00:04:36.863 } 00:04:36.863 } 00:04:36.863 ]' 00:04:36.863 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.121 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.121 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.121 16:18:06 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.121 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.121 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.121 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.121 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.121 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.122 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.122 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.122 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 16:18:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.122 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.122 16:18:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.122 16:18:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.122 00:04:37.122 real 0m0.279s 00:04:37.122 user 0m0.184s 00:04:37.122 sys 0m0.030s 00:04:37.122 16:18:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.122 16:18:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 ************************************ 00:04:37.122 END TEST rpc_integrity 00:04:37.122 ************************************ 00:04:37.122 16:18:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.122 16:18:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.122 16:18:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.122 16:18:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 ************************************ 00:04:37.122 START TEST rpc_plugins 
00:04:37.122 ************************************ 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:37.122 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.122 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.122 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.122 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.122 { 00:04:37.122 "name": "Malloc1", 00:04:37.122 "aliases": [ 00:04:37.122 "82af99e8-62bb-40c3-81d5-80178b21c371" 00:04:37.122 ], 00:04:37.122 "product_name": "Malloc disk", 00:04:37.122 "block_size": 4096, 00:04:37.122 "num_blocks": 256, 00:04:37.122 "uuid": "82af99e8-62bb-40c3-81d5-80178b21c371", 00:04:37.122 "assigned_rate_limits": { 00:04:37.122 "rw_ios_per_sec": 0, 00:04:37.122 "rw_mbytes_per_sec": 0, 00:04:37.122 "r_mbytes_per_sec": 0, 00:04:37.122 "w_mbytes_per_sec": 0 00:04:37.122 }, 00:04:37.122 "claimed": false, 00:04:37.122 "zoned": false, 00:04:37.122 "supported_io_types": { 00:04:37.122 "read": true, 00:04:37.122 "write": true, 00:04:37.122 "unmap": true, 00:04:37.122 "flush": true, 00:04:37.122 "reset": true, 00:04:37.122 "nvme_admin": false, 00:04:37.122 "nvme_io": false, 00:04:37.122 "nvme_io_md": false, 00:04:37.122 "write_zeroes": true, 00:04:37.122 "zcopy": true, 00:04:37.122 "get_zone_info": false, 00:04:37.122 "zone_management": false, 00:04:37.122 
"zone_append": false, 00:04:37.122 "compare": false, 00:04:37.122 "compare_and_write": false, 00:04:37.122 "abort": true, 00:04:37.122 "seek_hole": false, 00:04:37.122 "seek_data": false, 00:04:37.122 "copy": true, 00:04:37.122 "nvme_iov_md": false 00:04:37.122 }, 00:04:37.122 "memory_domains": [ 00:04:37.122 { 00:04:37.122 "dma_device_id": "system", 00:04:37.122 "dma_device_type": 1 00:04:37.122 }, 00:04:37.122 { 00:04:37.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.122 "dma_device_type": 2 00:04:37.122 } 00:04:37.122 ], 00:04:37.122 "driver_specific": {} 00:04:37.122 } 00:04:37.122 ]' 00:04:37.122 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.122 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.122 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.122 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.122 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.381 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.381 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.381 16:18:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.381 00:04:37.381 real 0m0.142s 00:04:37.381 user 0m0.086s 00:04:37.381 sys 0m0.016s 00:04:37.381 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.381 16:18:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 ************************************ 
00:04:37.381 END TEST rpc_plugins 00:04:37.381 ************************************ 00:04:37.381 16:18:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.381 16:18:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.381 16:18:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.381 16:18:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 ************************************ 00:04:37.381 START TEST rpc_trace_cmd_test 00:04:37.381 ************************************ 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:37.381 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid772974", 00:04:37.381 "tpoint_group_mask": "0x8", 00:04:37.381 "iscsi_conn": { 00:04:37.381 "mask": "0x2", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "scsi": { 00:04:37.381 "mask": "0x4", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "bdev": { 00:04:37.381 "mask": "0x8", 00:04:37.381 "tpoint_mask": "0xffffffffffffffff" 00:04:37.381 }, 00:04:37.381 "nvmf_rdma": { 00:04:37.381 "mask": "0x10", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "nvmf_tcp": { 00:04:37.381 "mask": "0x20", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "ftl": { 00:04:37.381 "mask": "0x40", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "blobfs": { 00:04:37.381 "mask": "0x80", 00:04:37.381 
"tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "dsa": { 00:04:37.381 "mask": "0x200", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "thread": { 00:04:37.381 "mask": "0x400", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "nvme_pcie": { 00:04:37.381 "mask": "0x800", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "iaa": { 00:04:37.381 "mask": "0x1000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "nvme_tcp": { 00:04:37.381 "mask": "0x2000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "bdev_nvme": { 00:04:37.381 "mask": "0x4000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "sock": { 00:04:37.381 "mask": "0x8000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "blob": { 00:04:37.381 "mask": "0x10000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "bdev_raid": { 00:04:37.381 "mask": "0x20000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "scheduler": { 00:04:37.381 "mask": "0x40000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 } 00:04:37.381 }' 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.381 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.640 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.640 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.640 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.640 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.640 16:18:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:37.640 00:04:37.640 real 0m0.226s 00:04:37.640 user 0m0.196s 00:04:37.640 sys 0m0.023s 00:04:37.640 16:18:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.640 16:18:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.640 ************************************ 00:04:37.640 END TEST rpc_trace_cmd_test 00:04:37.640 ************************************ 00:04:37.640 16:18:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.640 16:18:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.640 16:18:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.640 16:18:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.640 16:18:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.640 16:18:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.640 ************************************ 00:04:37.640 START TEST rpc_daemon_integrity 00:04:37.640 ************************************ 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.640 { 00:04:37.640 "name": "Malloc2", 00:04:37.640 "aliases": [ 00:04:37.640 "44388e97-86de-4ff8-b8a7-b507e67eb7db" 00:04:37.640 ], 00:04:37.640 "product_name": "Malloc disk", 00:04:37.640 "block_size": 512, 00:04:37.640 "num_blocks": 16384, 00:04:37.640 "uuid": "44388e97-86de-4ff8-b8a7-b507e67eb7db", 00:04:37.640 "assigned_rate_limits": { 00:04:37.640 "rw_ios_per_sec": 0, 00:04:37.640 "rw_mbytes_per_sec": 0, 00:04:37.640 "r_mbytes_per_sec": 0, 00:04:37.640 "w_mbytes_per_sec": 0 00:04:37.640 }, 00:04:37.640 "claimed": false, 00:04:37.640 "zoned": false, 00:04:37.640 "supported_io_types": { 00:04:37.640 "read": true, 00:04:37.640 "write": true, 00:04:37.640 "unmap": true, 00:04:37.640 "flush": true, 00:04:37.640 "reset": true, 00:04:37.640 "nvme_admin": false, 00:04:37.640 "nvme_io": false, 00:04:37.640 "nvme_io_md": false, 00:04:37.640 "write_zeroes": true, 00:04:37.640 "zcopy": true, 00:04:37.640 "get_zone_info": false, 00:04:37.640 "zone_management": false, 00:04:37.640 "zone_append": false, 00:04:37.640 "compare": false, 00:04:37.640 "compare_and_write": false, 00:04:37.640 "abort": true, 00:04:37.640 "seek_hole": false, 00:04:37.640 "seek_data": false, 00:04:37.640 "copy": true, 00:04:37.640 "nvme_iov_md": false 00:04:37.640 }, 00:04:37.640 "memory_domains": [ 00:04:37.640 { 
00:04:37.640 "dma_device_id": "system", 00:04:37.640 "dma_device_type": 1 00:04:37.640 }, 00:04:37.640 { 00:04:37.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.640 "dma_device_type": 2 00:04:37.640 } 00:04:37.640 ], 00:04:37.640 "driver_specific": {} 00:04:37.640 } 00:04:37.640 ]' 00:04:37.640 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 [2024-12-14 16:18:07.746441] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:37.900 [2024-12-14 16:18:07.746471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.900 [2024-12-14 16:18:07.746487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x93cf80 00:04:37.900 [2024-12-14 16:18:07.746493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.900 [2024-12-14 16:18:07.747513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.900 [2024-12-14 16:18:07.747535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.900 Passthru0 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:37.900 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.900 { 00:04:37.900 "name": "Malloc2", 00:04:37.900 "aliases": [ 00:04:37.900 "44388e97-86de-4ff8-b8a7-b507e67eb7db" 00:04:37.900 ], 00:04:37.900 "product_name": "Malloc disk", 00:04:37.900 "block_size": 512, 00:04:37.900 "num_blocks": 16384, 00:04:37.900 "uuid": "44388e97-86de-4ff8-b8a7-b507e67eb7db", 00:04:37.900 "assigned_rate_limits": { 00:04:37.900 "rw_ios_per_sec": 0, 00:04:37.900 "rw_mbytes_per_sec": 0, 00:04:37.900 "r_mbytes_per_sec": 0, 00:04:37.900 "w_mbytes_per_sec": 0 00:04:37.900 }, 00:04:37.900 "claimed": true, 00:04:37.900 "claim_type": "exclusive_write", 00:04:37.900 "zoned": false, 00:04:37.900 "supported_io_types": { 00:04:37.900 "read": true, 00:04:37.900 "write": true, 00:04:37.900 "unmap": true, 00:04:37.900 "flush": true, 00:04:37.900 "reset": true, 00:04:37.900 "nvme_admin": false, 00:04:37.900 "nvme_io": false, 00:04:37.900 "nvme_io_md": false, 00:04:37.900 "write_zeroes": true, 00:04:37.900 "zcopy": true, 00:04:37.900 "get_zone_info": false, 00:04:37.900 "zone_management": false, 00:04:37.900 "zone_append": false, 00:04:37.900 "compare": false, 00:04:37.900 "compare_and_write": false, 00:04:37.900 "abort": true, 00:04:37.900 "seek_hole": false, 00:04:37.900 "seek_data": false, 00:04:37.900 "copy": true, 00:04:37.900 "nvme_iov_md": false 00:04:37.900 }, 00:04:37.900 "memory_domains": [ 00:04:37.900 { 00:04:37.900 "dma_device_id": "system", 00:04:37.900 "dma_device_type": 1 00:04:37.900 }, 00:04:37.900 { 00:04:37.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.900 "dma_device_type": 2 00:04:37.900 } 00:04:37.900 ], 00:04:37.900 "driver_specific": {} 00:04:37.900 }, 00:04:37.900 { 00:04:37.900 "name": "Passthru0", 00:04:37.900 "aliases": [ 00:04:37.900 "5eb8e335-6a1a-51ae-8eaa-37c2f747509f" 00:04:37.900 ], 00:04:37.900 "product_name": "passthru", 00:04:37.900 "block_size": 512, 00:04:37.900 "num_blocks": 16384, 00:04:37.900 "uuid": 
"5eb8e335-6a1a-51ae-8eaa-37c2f747509f", 00:04:37.900 "assigned_rate_limits": { 00:04:37.900 "rw_ios_per_sec": 0, 00:04:37.900 "rw_mbytes_per_sec": 0, 00:04:37.900 "r_mbytes_per_sec": 0, 00:04:37.900 "w_mbytes_per_sec": 0 00:04:37.900 }, 00:04:37.900 "claimed": false, 00:04:37.900 "zoned": false, 00:04:37.900 "supported_io_types": { 00:04:37.900 "read": true, 00:04:37.900 "write": true, 00:04:37.900 "unmap": true, 00:04:37.900 "flush": true, 00:04:37.900 "reset": true, 00:04:37.900 "nvme_admin": false, 00:04:37.900 "nvme_io": false, 00:04:37.900 "nvme_io_md": false, 00:04:37.900 "write_zeroes": true, 00:04:37.900 "zcopy": true, 00:04:37.900 "get_zone_info": false, 00:04:37.900 "zone_management": false, 00:04:37.900 "zone_append": false, 00:04:37.900 "compare": false, 00:04:37.900 "compare_and_write": false, 00:04:37.900 "abort": true, 00:04:37.900 "seek_hole": false, 00:04:37.900 "seek_data": false, 00:04:37.900 "copy": true, 00:04:37.900 "nvme_iov_md": false 00:04:37.900 }, 00:04:37.900 "memory_domains": [ 00:04:37.900 { 00:04:37.900 "dma_device_id": "system", 00:04:37.900 "dma_device_type": 1 00:04:37.900 }, 00:04:37.900 { 00:04:37.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.900 "dma_device_type": 2 00:04:37.900 } 00:04:37.900 ], 00:04:37.900 "driver_specific": { 00:04:37.900 "passthru": { 00:04:37.900 "name": "Passthru0", 00:04:37.900 "base_bdev_name": "Malloc2" 00:04:37.900 } 00:04:37.900 } 00:04:37.900 } 00:04:37.901 ]' 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.901 00:04:37.901 real 0m0.273s 00:04:37.901 user 0m0.164s 00:04:37.901 sys 0m0.046s 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.901 16:18:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.901 ************************************ 00:04:37.901 END TEST rpc_daemon_integrity 00:04:37.901 ************************************ 00:04:37.901 16:18:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:37.901 16:18:07 rpc -- rpc/rpc.sh@84 -- # killprocess 772974 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 772974 ']' 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@958 -- # kill -0 772974 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.901 16:18:07 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772974 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772974' 00:04:37.901 killing process with pid 772974 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@973 -- # kill 772974 00:04:37.901 16:18:07 rpc -- common/autotest_common.sh@978 -- # wait 772974 00:04:38.469 00:04:38.469 real 0m2.067s 00:04:38.469 user 0m2.636s 00:04:38.469 sys 0m0.700s 00:04:38.469 16:18:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.469 16:18:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.469 ************************************ 00:04:38.469 END TEST rpc 00:04:38.469 ************************************ 00:04:38.469 16:18:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:38.469 16:18:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.469 16:18:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.469 16:18:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.469 ************************************ 00:04:38.469 START TEST skip_rpc 00:04:38.469 ************************************ 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:38.469 * Looking for test storage... 
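The `killprocess 772974` sequence above checks the pid with `kill -0`, inspects the process name, kills it, then waits for it to exit. A minimal sketch of that pattern (a simplified assumption about the shape of the `autotest_common.sh` helper, not its real code):

```shell
# Sketch of the killprocess pattern: confirm the pid is alive with
# `kill -0`, terminate it, then `wait` to reap it so no zombie remains.
sleep 30 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then        # process exists and is ours
    kill -9 "$pid"
    wait "$pid" 2>/dev/null || true        # reap; ignore the killed status
    result=killed
else
    result=missing
fi
```

The `kill -0` probe sends no signal; it only tests whether the pid can be signaled, which is why the real helper uses it as an existence check before and after the kill.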
00:04:38.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.469 16:18:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.469 --rc genhtml_branch_coverage=1 00:04:38.469 --rc genhtml_function_coverage=1 00:04:38.469 --rc genhtml_legend=1 00:04:38.469 --rc geninfo_all_blocks=1 00:04:38.469 --rc geninfo_unexecuted_blocks=1 00:04:38.469 00:04:38.469 ' 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.469 --rc genhtml_branch_coverage=1 00:04:38.469 --rc genhtml_function_coverage=1 00:04:38.469 --rc genhtml_legend=1 00:04:38.469 --rc geninfo_all_blocks=1 00:04:38.469 --rc geninfo_unexecuted_blocks=1 00:04:38.469 00:04:38.469 ' 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:38.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.469 --rc genhtml_branch_coverage=1 00:04:38.469 --rc genhtml_function_coverage=1 00:04:38.469 --rc genhtml_legend=1 00:04:38.469 --rc geninfo_all_blocks=1 00:04:38.469 --rc geninfo_unexecuted_blocks=1 00:04:38.469 00:04:38.469 ' 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.469 --rc genhtml_branch_coverage=1 00:04:38.469 --rc genhtml_function_coverage=1 00:04:38.469 --rc genhtml_legend=1 00:04:38.469 --rc geninfo_all_blocks=1 00:04:38.469 --rc geninfo_unexecuted_blocks=1 00:04:38.469 00:04:38.469 ' 00:04:38.469 16:18:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.469 16:18:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:38.469 16:18:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.469 16:18:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.469 ************************************ 00:04:38.469 START TEST skip_rpc 00:04:38.469 ************************************ 00:04:38.469 16:18:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:38.469 16:18:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=773820 00:04:38.469 16:18:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.469 16:18:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:38.469 16:18:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:38.728 [2024-12-14 16:18:08.591442] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:38.728 [2024-12-14 16:18:08.591478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773820 ] 00:04:38.728 [2024-12-14 16:18:08.664511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.728 [2024-12-14 16:18:08.686504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.997 16:18:13 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 773820 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 773820 ']' 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 773820 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773820 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773820' 00:04:43.997 killing process with pid 773820 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 773820 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 773820 00:04:43.997 00:04:43.997 real 0m5.355s 00:04:43.997 user 0m5.118s 00:04:43.997 sys 0m0.271s 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.997 16:18:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.997 ************************************ 00:04:43.997 END TEST skip_rpc 00:04:43.997 ************************************ 00:04:43.997 16:18:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:43.997 16:18:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.997 16:18:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.997 16:18:13 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.997 ************************************ 00:04:43.997 START TEST skip_rpc_with_json 00:04:43.997 ************************************ 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=774742 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 774742 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 774742 ']' 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.997 16:18:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.998 [2024-12-14 16:18:14.015922] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
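`waitforlisten 774742` above blocks until the target's RPC socket at `/var/tmp/spdk.sock` is accepting connections. An illustrative bounded-poll loop in that spirit (the socket path, retry count, and the `touch` standing in for the server are assumptions for this sketch, not the real helper):

```shell
# Bounded poll for a socket path to appear, in the spirit of
# waitforlisten: retry up to max_retries times, then give up.
sock=$(mktemp -u)          # stand-in path; real tests use /var/tmp/spdk.sock
max_retries=5
i=0
status=timeout
while [ "$i" -lt "$max_retries" ]; do
    if [ -e "$sock" ]; then
        status=listening
        break
    fi
    touch "$sock"          # simulate the server creating its socket
    i=$((i + 1))
done
rm -f "$sock"
```

The real helper additionally verifies the process is still alive on each retry, so a crashed target fails fast instead of burning the full retry budget.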
00:04:43.998 [2024-12-14 16:18:14.015962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774742 ] 00:04:44.256 [2024-12-14 16:18:14.091318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.256 [2024-12-14 16:18:14.114173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.256 [2024-12-14 16:18:14.323388] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:44.256 request: 00:04:44.256 { 00:04:44.256 "trtype": "tcp", 00:04:44.256 "method": "nvmf_get_transports", 00:04:44.256 "req_id": 1 00:04:44.256 } 00:04:44.256 Got JSON-RPC error response 00:04:44.256 response: 00:04:44.256 { 00:04:44.256 "code": -19, 00:04:44.256 "message": "No such device" 00:04:44.256 } 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.256 [2024-12-14 16:18:14.335490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.256 16:18:14 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.256 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.515 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.515 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.515 { 00:04:44.515 "subsystems": [ 00:04:44.515 { 00:04:44.515 "subsystem": "fsdev", 00:04:44.515 "config": [ 00:04:44.515 { 00:04:44.515 "method": "fsdev_set_opts", 00:04:44.515 "params": { 00:04:44.515 "fsdev_io_pool_size": 65535, 00:04:44.515 "fsdev_io_cache_size": 256 00:04:44.515 } 00:04:44.515 } 00:04:44.515 ] 00:04:44.515 }, 00:04:44.515 { 00:04:44.515 "subsystem": "vfio_user_target", 00:04:44.515 "config": null 00:04:44.515 }, 00:04:44.515 { 00:04:44.515 "subsystem": "keyring", 00:04:44.515 "config": [] 00:04:44.515 }, 00:04:44.515 { 00:04:44.515 "subsystem": "iobuf", 00:04:44.515 "config": [ 00:04:44.515 { 00:04:44.515 "method": "iobuf_set_options", 00:04:44.515 "params": { 00:04:44.515 "small_pool_count": 8192, 00:04:44.515 "large_pool_count": 1024, 00:04:44.515 "small_bufsize": 8192, 00:04:44.515 "large_bufsize": 135168, 00:04:44.515 "enable_numa": false 00:04:44.515 } 00:04:44.515 } 00:04:44.515 ] 00:04:44.515 }, 00:04:44.515 { 00:04:44.515 "subsystem": "sock", 00:04:44.515 "config": [ 00:04:44.515 { 00:04:44.515 "method": "sock_set_default_impl", 00:04:44.515 "params": { 00:04:44.515 "impl_name": "posix" 00:04:44.515 } 00:04:44.515 }, 00:04:44.515 { 00:04:44.515 "method": "sock_impl_set_options", 00:04:44.515 "params": { 00:04:44.515 "impl_name": "ssl", 00:04:44.515 "recv_buf_size": 4096, 00:04:44.515 "send_buf_size": 4096, 
00:04:44.515 "enable_recv_pipe": true, 00:04:44.516 "enable_quickack": false, 00:04:44.516 "enable_placement_id": 0, 00:04:44.516 "enable_zerocopy_send_server": true, 00:04:44.516 "enable_zerocopy_send_client": false, 00:04:44.516 "zerocopy_threshold": 0, 00:04:44.516 "tls_version": 0, 00:04:44.516 "enable_ktls": false 00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "sock_impl_set_options", 00:04:44.516 "params": { 00:04:44.516 "impl_name": "posix", 00:04:44.516 "recv_buf_size": 2097152, 00:04:44.516 "send_buf_size": 2097152, 00:04:44.516 "enable_recv_pipe": true, 00:04:44.516 "enable_quickack": false, 00:04:44.516 "enable_placement_id": 0, 00:04:44.516 "enable_zerocopy_send_server": true, 00:04:44.516 "enable_zerocopy_send_client": false, 00:04:44.516 "zerocopy_threshold": 0, 00:04:44.516 "tls_version": 0, 00:04:44.516 "enable_ktls": false 00:04:44.516 } 00:04:44.516 } 00:04:44.516 ] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "vmd", 00:04:44.516 "config": [] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "accel", 00:04:44.516 "config": [ 00:04:44.516 { 00:04:44.516 "method": "accel_set_options", 00:04:44.516 "params": { 00:04:44.516 "small_cache_size": 128, 00:04:44.516 "large_cache_size": 16, 00:04:44.516 "task_count": 2048, 00:04:44.516 "sequence_count": 2048, 00:04:44.516 "buf_count": 2048 00:04:44.516 } 00:04:44.516 } 00:04:44.516 ] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "bdev", 00:04:44.516 "config": [ 00:04:44.516 { 00:04:44.516 "method": "bdev_set_options", 00:04:44.516 "params": { 00:04:44.516 "bdev_io_pool_size": 65535, 00:04:44.516 "bdev_io_cache_size": 256, 00:04:44.516 "bdev_auto_examine": true, 00:04:44.516 "iobuf_small_cache_size": 128, 00:04:44.516 "iobuf_large_cache_size": 16 00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "bdev_raid_set_options", 00:04:44.516 "params": { 00:04:44.516 "process_window_size_kb": 1024, 00:04:44.516 "process_max_bandwidth_mb_sec": 0 
00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "bdev_iscsi_set_options", 00:04:44.516 "params": { 00:04:44.516 "timeout_sec": 30 00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "bdev_nvme_set_options", 00:04:44.516 "params": { 00:04:44.516 "action_on_timeout": "none", 00:04:44.516 "timeout_us": 0, 00:04:44.516 "timeout_admin_us": 0, 00:04:44.516 "keep_alive_timeout_ms": 10000, 00:04:44.516 "arbitration_burst": 0, 00:04:44.516 "low_priority_weight": 0, 00:04:44.516 "medium_priority_weight": 0, 00:04:44.516 "high_priority_weight": 0, 00:04:44.516 "nvme_adminq_poll_period_us": 10000, 00:04:44.516 "nvme_ioq_poll_period_us": 0, 00:04:44.516 "io_queue_requests": 0, 00:04:44.516 "delay_cmd_submit": true, 00:04:44.516 "transport_retry_count": 4, 00:04:44.516 "bdev_retry_count": 3, 00:04:44.516 "transport_ack_timeout": 0, 00:04:44.516 "ctrlr_loss_timeout_sec": 0, 00:04:44.516 "reconnect_delay_sec": 0, 00:04:44.516 "fast_io_fail_timeout_sec": 0, 00:04:44.516 "disable_auto_failback": false, 00:04:44.516 "generate_uuids": false, 00:04:44.516 "transport_tos": 0, 00:04:44.516 "nvme_error_stat": false, 00:04:44.516 "rdma_srq_size": 0, 00:04:44.516 "io_path_stat": false, 00:04:44.516 "allow_accel_sequence": false, 00:04:44.516 "rdma_max_cq_size": 0, 00:04:44.516 "rdma_cm_event_timeout_ms": 0, 00:04:44.516 "dhchap_digests": [ 00:04:44.516 "sha256", 00:04:44.516 "sha384", 00:04:44.516 "sha512" 00:04:44.516 ], 00:04:44.516 "dhchap_dhgroups": [ 00:04:44.516 "null", 00:04:44.516 "ffdhe2048", 00:04:44.516 "ffdhe3072", 00:04:44.516 "ffdhe4096", 00:04:44.516 "ffdhe6144", 00:04:44.516 "ffdhe8192" 00:04:44.516 ], 00:04:44.516 "rdma_umr_per_io": false 00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "bdev_nvme_set_hotplug", 00:04:44.516 "params": { 00:04:44.516 "period_us": 100000, 00:04:44.516 "enable": false 00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "bdev_wait_for_examine" 00:04:44.516 } 00:04:44.516 
] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "scsi", 00:04:44.516 "config": null 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "scheduler", 00:04:44.516 "config": [ 00:04:44.516 { 00:04:44.516 "method": "framework_set_scheduler", 00:04:44.516 "params": { 00:04:44.516 "name": "static" 00:04:44.516 } 00:04:44.516 } 00:04:44.516 ] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "vhost_scsi", 00:04:44.516 "config": [] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "vhost_blk", 00:04:44.516 "config": [] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "ublk", 00:04:44.516 "config": [] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "nbd", 00:04:44.516 "config": [] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "nvmf", 00:04:44.516 "config": [ 00:04:44.516 { 00:04:44.516 "method": "nvmf_set_config", 00:04:44.516 "params": { 00:04:44.516 "discovery_filter": "match_any", 00:04:44.516 "admin_cmd_passthru": { 00:04:44.516 "identify_ctrlr": false 00:04:44.516 }, 00:04:44.516 "dhchap_digests": [ 00:04:44.516 "sha256", 00:04:44.516 "sha384", 00:04:44.516 "sha512" 00:04:44.516 ], 00:04:44.516 "dhchap_dhgroups": [ 00:04:44.516 "null", 00:04:44.516 "ffdhe2048", 00:04:44.516 "ffdhe3072", 00:04:44.516 "ffdhe4096", 00:04:44.516 "ffdhe6144", 00:04:44.516 "ffdhe8192" 00:04:44.516 ] 00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "nvmf_set_max_subsystems", 00:04:44.516 "params": { 00:04:44.516 "max_subsystems": 1024 00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "nvmf_set_crdt", 00:04:44.516 "params": { 00:04:44.516 "crdt1": 0, 00:04:44.516 "crdt2": 0, 00:04:44.516 "crdt3": 0 00:04:44.516 } 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "method": "nvmf_create_transport", 00:04:44.516 "params": { 00:04:44.516 "trtype": "TCP", 00:04:44.516 "max_queue_depth": 128, 00:04:44.516 "max_io_qpairs_per_ctrlr": 127, 00:04:44.516 "in_capsule_data_size": 4096, 00:04:44.516 "max_io_size": 
131072, 00:04:44.516 "io_unit_size": 131072, 00:04:44.516 "max_aq_depth": 128, 00:04:44.516 "num_shared_buffers": 511, 00:04:44.516 "buf_cache_size": 4294967295, 00:04:44.516 "dif_insert_or_strip": false, 00:04:44.516 "zcopy": false, 00:04:44.516 "c2h_success": true, 00:04:44.516 "sock_priority": 0, 00:04:44.516 "abort_timeout_sec": 1, 00:04:44.516 "ack_timeout": 0, 00:04:44.516 "data_wr_pool_size": 0 00:04:44.516 } 00:04:44.516 } 00:04:44.516 ] 00:04:44.516 }, 00:04:44.516 { 00:04:44.516 "subsystem": "iscsi", 00:04:44.516 "config": [ 00:04:44.516 { 00:04:44.516 "method": "iscsi_set_options", 00:04:44.516 "params": { 00:04:44.516 "node_base": "iqn.2016-06.io.spdk", 00:04:44.516 "max_sessions": 128, 00:04:44.516 "max_connections_per_session": 2, 00:04:44.516 "max_queue_depth": 64, 00:04:44.516 "default_time2wait": 2, 00:04:44.516 "default_time2retain": 20, 00:04:44.516 "first_burst_length": 8192, 00:04:44.516 "immediate_data": true, 00:04:44.516 "allow_duplicated_isid": false, 00:04:44.516 "error_recovery_level": 0, 00:04:44.516 "nop_timeout": 60, 00:04:44.516 "nop_in_interval": 30, 00:04:44.516 "disable_chap": false, 00:04:44.516 "require_chap": false, 00:04:44.516 "mutual_chap": false, 00:04:44.516 "chap_group": 0, 00:04:44.516 "max_large_datain_per_connection": 64, 00:04:44.516 "max_r2t_per_connection": 4, 00:04:44.516 "pdu_pool_size": 36864, 00:04:44.516 "immediate_data_pool_size": 16384, 00:04:44.516 "data_out_pool_size": 2048 00:04:44.516 } 00:04:44.516 } 00:04:44.516 ] 00:04:44.516 } 00:04:44.516 ] 00:04:44.516 } 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 774742 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 774742 ']' 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 774742 00:04:44.516 16:18:14 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774742 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774742' 00:04:44.516 killing process with pid 774742 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 774742 00:04:44.516 16:18:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 774742 00:04:44.776 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=774852 00:04:44.776 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.776 16:18:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 774852 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 774852 ']' 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 774852 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774852 00:04:50.046 16:18:19 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774852' 00:04:50.046 killing process with pid 774852 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 774852 00:04:50.046 16:18:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 774852 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.305 00:04:50.305 real 0m6.240s 00:04:50.305 user 0m5.925s 00:04:50.305 sys 0m0.602s 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.305 ************************************ 00:04:50.305 END TEST skip_rpc_with_json 00:04:50.305 ************************************ 00:04:50.305 16:18:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:50.305 16:18:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.305 16:18:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.305 16:18:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.305 ************************************ 00:04:50.305 START TEST skip_rpc_with_delay 00:04:50.305 ************************************ 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- 
rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.305 [2024-12-14 16:18:20.333661] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.305 00:04:50.305 real 0m0.070s 00:04:50.305 user 0m0.046s 00:04:50.305 sys 0m0.023s 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.305 16:18:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:50.305 ************************************ 00:04:50.305 END TEST skip_rpc_with_delay 00:04:50.305 ************************************ 00:04:50.305 16:18:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:50.305 16:18:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:50.305 16:18:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:50.305 16:18:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.305 16:18:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.305 16:18:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.564 ************************************ 00:04:50.564 START TEST exit_on_failed_rpc_init 00:04:50.564 ************************************ 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=775875 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 775875 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 775875 ']' 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.565 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.565 [2024-12-14 16:18:20.471524] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:50.565 [2024-12-14 16:18:20.471571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775875 ] 00:04:50.565 [2024-12-14 16:18:20.547851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.565 [2024-12-14 16:18:20.570594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.824 
16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.824 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.824 [2024-12-14 16:18:20.829554] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:50.824 [2024-12-14 16:18:20.829605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775932 ] 00:04:50.824 [2024-12-14 16:18:20.903964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.083 [2024-12-14 16:18:20.925978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.083 [2024-12-14 16:18:20.926028] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:51.083 [2024-12-14 16:18:20.926053] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.083 [2024-12-14 16:18:20.926059] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 775875 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 775875 ']' 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 775875 00:04:51.083 16:18:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.083 16:18:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775875 00:04:51.083 16:18:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.083 16:18:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.083 16:18:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775875' 00:04:51.083 killing process with pid 775875 00:04:51.083 16:18:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 775875 00:04:51.083 16:18:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 775875 00:04:51.343 00:04:51.343 real 0m0.885s 00:04:51.343 user 0m0.917s 00:04:51.343 sys 0m0.392s 00:04:51.343 16:18:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.343 16:18:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.343 ************************************ 00:04:51.343 END TEST exit_on_failed_rpc_init 00:04:51.343 ************************************ 00:04:51.343 16:18:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.343 00:04:51.343 real 0m13.012s 00:04:51.343 user 0m12.215s 00:04:51.343 sys 0m1.573s 00:04:51.343 16:18:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.343 16:18:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.343 ************************************ 00:04:51.343 END TEST skip_rpc 00:04:51.343 ************************************ 00:04:51.343 16:18:21 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.343 16:18:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.343 16:18:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.343 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.343 ************************************ 00:04:51.343 START TEST rpc_client 00:04:51.343 ************************************ 00:04:51.343 16:18:21 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.602 * Looking for test storage... 00:04:51.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:51.602 16:18:21 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.602 16:18:21 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.602 16:18:21 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.602 16:18:21 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.602 16:18:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:51.602 16:18:21 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.602 16:18:21 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.602 --rc genhtml_branch_coverage=1 00:04:51.602 --rc genhtml_function_coverage=1 00:04:51.602 --rc genhtml_legend=1 00:04:51.602 --rc geninfo_all_blocks=1 00:04:51.602 --rc geninfo_unexecuted_blocks=1 00:04:51.602 00:04:51.602 ' 00:04:51.602 16:18:21 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.602 --rc genhtml_branch_coverage=1 
00:04:51.602 --rc genhtml_function_coverage=1 00:04:51.602 --rc genhtml_legend=1 00:04:51.602 --rc geninfo_all_blocks=1 00:04:51.602 --rc geninfo_unexecuted_blocks=1 00:04:51.603 00:04:51.603 ' 00:04:51.603 16:18:21 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.603 --rc genhtml_branch_coverage=1 00:04:51.603 --rc genhtml_function_coverage=1 00:04:51.603 --rc genhtml_legend=1 00:04:51.603 --rc geninfo_all_blocks=1 00:04:51.603 --rc geninfo_unexecuted_blocks=1 00:04:51.603 00:04:51.603 ' 00:04:51.603 16:18:21 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.603 --rc genhtml_branch_coverage=1 00:04:51.603 --rc genhtml_function_coverage=1 00:04:51.603 --rc genhtml_legend=1 00:04:51.603 --rc geninfo_all_blocks=1 00:04:51.603 --rc geninfo_unexecuted_blocks=1 00:04:51.603 00:04:51.603 ' 00:04:51.603 16:18:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:51.603 OK 00:04:51.603 16:18:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.603 00:04:51.603 real 0m0.202s 00:04:51.603 user 0m0.120s 00:04:51.603 sys 0m0.095s 00:04:51.603 16:18:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.603 16:18:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.603 ************************************ 00:04:51.603 END TEST rpc_client 00:04:51.603 ************************************ 00:04:51.603 16:18:21 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.603 16:18:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.603 16:18:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.603 16:18:21 -- common/autotest_common.sh@10 
-- # set +x 00:04:51.603 ************************************ 00:04:51.603 START TEST json_config 00:04:51.603 ************************************ 00:04:51.603 16:18:21 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.862 16:18:21 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.862 16:18:21 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.862 16:18:21 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.862 16:18:21 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.862 16:18:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.863 16:18:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.863 16:18:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.863 16:18:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.863 16:18:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.863 16:18:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.863 16:18:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.863 16:18:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.863 16:18:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.863 16:18:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.863 16:18:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.863 16:18:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:51.863 16:18:21 json_config -- scripts/common.sh@345 -- # : 1 00:04:51.863 16:18:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.863 16:18:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.863 16:18:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:51.863 16:18:21 json_config -- scripts/common.sh@353 -- # local d=1 00:04:51.863 16:18:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.863 16:18:21 json_config -- scripts/common.sh@355 -- # echo 1 00:04:51.863 16:18:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.863 16:18:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:51.863 16:18:21 json_config -- scripts/common.sh@353 -- # local d=2 00:04:51.863 16:18:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.863 16:18:21 json_config -- scripts/common.sh@355 -- # echo 2 00:04:51.863 16:18:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.863 16:18:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.863 16:18:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.863 16:18:21 json_config -- scripts/common.sh@368 -- # return 0 00:04:51.863 16:18:21 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.863 16:18:21 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.863 --rc genhtml_branch_coverage=1 00:04:51.863 --rc genhtml_function_coverage=1 00:04:51.863 --rc genhtml_legend=1 00:04:51.863 --rc geninfo_all_blocks=1 00:04:51.863 --rc geninfo_unexecuted_blocks=1 00:04:51.863 00:04:51.863 ' 00:04:51.863 16:18:21 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.863 --rc genhtml_branch_coverage=1 00:04:51.863 --rc genhtml_function_coverage=1 00:04:51.863 --rc genhtml_legend=1 00:04:51.863 --rc geninfo_all_blocks=1 00:04:51.863 --rc geninfo_unexecuted_blocks=1 00:04:51.863 00:04:51.863 ' 00:04:51.863 16:18:21 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.863 --rc genhtml_branch_coverage=1 00:04:51.863 --rc genhtml_function_coverage=1 00:04:51.863 --rc genhtml_legend=1 00:04:51.863 --rc geninfo_all_blocks=1 00:04:51.863 --rc geninfo_unexecuted_blocks=1 00:04:51.863 00:04:51.863 ' 00:04:51.863 16:18:21 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.863 --rc genhtml_branch_coverage=1 00:04:51.863 --rc genhtml_function_coverage=1 00:04:51.863 --rc genhtml_legend=1 00:04:51.863 --rc geninfo_all_blocks=1 00:04:51.863 --rc geninfo_unexecuted_blocks=1 00:04:51.863 00:04:51.863 ' 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.863 16:18:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.863 16:18:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.863 16:18:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.863 16:18:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.863 16:18:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.863 16:18:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.863 16:18:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.863 16:18:21 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.863 16:18:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@51 -- # : 0 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.863 16:18:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:51.863 INFO: JSON configuration test init 00:04:51.863 16:18:21 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:51.863 16:18:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.863 16:18:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:51.863 16:18:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.863 16:18:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.863 16:18:21 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.863 16:18:21 json_config -- json_config/common.sh@9 -- # local app=target 00:04:51.863 16:18:21 json_config -- json_config/common.sh@10 -- # shift 00:04:51.863 16:18:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.863 16:18:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.863 16:18:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.863 16:18:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.863 16:18:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.863 16:18:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=776278 00:04:51.863 16:18:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.864 Waiting for target to run... 
00:04:51.864 16:18:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.864 16:18:21 json_config -- json_config/common.sh@25 -- # waitforlisten 776278 /var/tmp/spdk_tgt.sock 00:04:51.864 16:18:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 776278 ']' 00:04:51.864 16:18:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.864 16:18:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.864 16:18:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.864 16:18:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.864 16:18:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.864 [2024-12-14 16:18:21.931658] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:51.864 [2024-12-14 16:18:21.931707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776278 ] 00:04:52.431 [2024-12-14 16:18:22.388372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.431 [2024-12-14 16:18:22.410515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.690 16:18:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.690 16:18:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:52.690 16:18:22 json_config -- json_config/common.sh@26 -- # echo '' 00:04:52.690 00:04:52.690 16:18:22 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:52.690 16:18:22 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:52.690 16:18:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.690 16:18:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.690 16:18:22 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:52.690 16:18:22 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:52.690 16:18:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.690 16:18:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.949 16:18:22 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:52.949 16:18:22 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:52.949 16:18:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:56.236 16:18:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.236 16:18:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:56.236 16:18:25 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:56.236 16:18:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@54 -- # sort 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:56.236 16:18:26 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:56.236 16:18:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.236 16:18:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:56.236 16:18:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.236 16:18:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:56.236 16:18:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:56.236 16:18:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:56.236 MallocForNvmf0 00:04:56.495 16:18:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:56.495 16:18:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:56.495 MallocForNvmf1 00:04:56.495 16:18:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:56.495 16:18:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:56.753 [2024-12-14 16:18:26.674499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.753 16:18:26 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.753 16:18:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:57.012 16:18:26 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:57.012 16:18:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:57.012 16:18:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:57.012 16:18:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:57.270 16:18:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:57.270 16:18:27 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:57.529 [2024-12-14 16:18:27.424763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:57.529 16:18:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:57.529 16:18:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.529 16:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.529 16:18:27 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:57.529 16:18:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.529 16:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.529 16:18:27 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:57.529 16:18:27 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.529 16:18:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.788 MallocBdevForConfigChangeCheck 00:04:57.788 16:18:27 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:57.788 16:18:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.788 16:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.788 16:18:27 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:57.788 16:18:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.046 16:18:28 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:58.046 INFO: shutting down applications... 00:04:58.046 16:18:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:58.046 16:18:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:58.046 16:18:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:58.046 16:18:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:59.949 Calling clear_iscsi_subsystem 00:04:59.949 Calling clear_nvmf_subsystem 00:04:59.949 Calling clear_nbd_subsystem 00:04:59.949 Calling clear_ublk_subsystem 00:04:59.949 Calling clear_vhost_blk_subsystem 00:04:59.949 Calling clear_vhost_scsi_subsystem 00:04:59.949 Calling clear_bdev_subsystem 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@352 -- # break 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:59.949 16:18:29 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:59.949 16:18:29 json_config -- json_config/common.sh@31 -- # local app=target 00:04:59.949 16:18:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.949 16:18:29 json_config -- json_config/common.sh@35 -- # [[ -n 776278 ]] 00:04:59.949 16:18:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 776278 00:04:59.949 16:18:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.949 16:18:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.949 16:18:29 json_config -- json_config/common.sh@41 -- # kill -0 776278 00:04:59.949 16:18:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.517 16:18:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.517 16:18:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.517 16:18:30 json_config -- json_config/common.sh@41 -- # kill -0 776278 00:05:00.517 16:18:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.517 16:18:30 json_config -- json_config/common.sh@43 -- # break 00:05:00.517 16:18:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.517 16:18:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.517 SPDK target shutdown done 00:05:00.517 16:18:30 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:00.517 INFO: relaunching applications... 
00:05:00.517 16:18:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.517 16:18:30 json_config -- json_config/common.sh@9 -- # local app=target 00:05:00.517 16:18:30 json_config -- json_config/common.sh@10 -- # shift 00:05:00.517 16:18:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.517 16:18:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.517 16:18:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.517 16:18:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.517 16:18:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.517 16:18:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.517 16:18:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=777754 00:05:00.517 16:18:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.517 Waiting for target to run... 00:05:00.517 16:18:30 json_config -- json_config/common.sh@25 -- # waitforlisten 777754 /var/tmp/spdk_tgt.sock 00:05:00.517 16:18:30 json_config -- common/autotest_common.sh@835 -- # '[' -z 777754 ']' 00:05:00.517 16:18:30 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.517 16:18:30 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.517 16:18:30 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:00.517 16:18:30 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.517 16:18:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.517 [2024-12-14 16:18:30.531889] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:00.517 [2024-12-14 16:18:30.531946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777754 ] 00:05:01.084 [2024-12-14 16:18:30.991697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.084 [2024-12-14 16:18:31.012312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.372 [2024-12-14 16:18:34.016410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.372 [2024-12-14 16:18:34.048678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.939 16:18:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.939 16:18:34 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:04.939 16:18:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.939 00:05:04.939 16:18:34 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:04.939 16:18:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:04.939 INFO: Checking if target configuration is the same... 
00:05:04.939 16:18:34 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:04.939 16:18:34 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.939 16:18:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.939 + '[' 2 -ne 2 ']' 00:05:04.939 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:04.939 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:04.939 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.939 +++ basename /dev/fd/62 00:05:04.939 ++ mktemp /tmp/62.XXX 00:05:04.939 + tmp_file_1=/tmp/62.368 00:05:04.939 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.939 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.939 + tmp_file_2=/tmp/spdk_tgt_config.json.7Rl 00:05:04.939 + ret=0 00:05:04.939 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.199 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.199 + diff -u /tmp/62.368 /tmp/spdk_tgt_config.json.7Rl 00:05:05.199 + echo 'INFO: JSON config files are the same' 00:05:05.199 INFO: JSON config files are the same 00:05:05.199 + rm /tmp/62.368 /tmp/spdk_tgt_config.json.7Rl 00:05:05.199 + exit 0 00:05:05.200 16:18:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:05.200 16:18:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:05.200 INFO: changing configuration and checking if this can be detected... 
00:05:05.200 16:18:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.200 16:18:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.464 16:18:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.464 16:18:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:05.464 16:18:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.464 + '[' 2 -ne 2 ']' 00:05:05.464 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:05.464 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:05.464 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.464 +++ basename /dev/fd/62 00:05:05.464 ++ mktemp /tmp/62.XXX 00:05:05.464 + tmp_file_1=/tmp/62.Z4t 00:05:05.464 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.464 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.464 + tmp_file_2=/tmp/spdk_tgt_config.json.OEE 00:05:05.464 + ret=0 00:05:05.464 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.722 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.722 + diff -u /tmp/62.Z4t /tmp/spdk_tgt_config.json.OEE 00:05:05.722 + ret=1 00:05:05.722 + echo '=== Start of file: /tmp/62.Z4t ===' 00:05:05.722 + cat /tmp/62.Z4t 00:05:05.722 + echo '=== End of file: /tmp/62.Z4t ===' 00:05:05.722 + echo '' 00:05:05.722 + echo '=== Start of file: /tmp/spdk_tgt_config.json.OEE ===' 00:05:05.722 + cat /tmp/spdk_tgt_config.json.OEE 00:05:05.722 + echo '=== End of file: /tmp/spdk_tgt_config.json.OEE ===' 00:05:05.722 + echo '' 00:05:05.722 + rm /tmp/62.Z4t /tmp/spdk_tgt_config.json.OEE 00:05:05.722 + exit 1 00:05:05.722 16:18:35 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:05.723 INFO: configuration change detected. 
00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:05.723 16:18:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.723 16:18:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@324 -- # [[ -n 777754 ]] 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:05.723 16:18:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.723 16:18:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:05.723 16:18:35 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:05.723 16:18:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.723 16:18:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.982 16:18:35 json_config -- json_config/json_config.sh@330 -- # killprocess 777754 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@954 -- # '[' -z 777754 ']' 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@958 -- # kill -0 777754 
00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@959 -- # uname 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777754 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777754' 00:05:05.982 killing process with pid 777754 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@973 -- # kill 777754 00:05:05.982 16:18:35 json_config -- common/autotest_common.sh@978 -- # wait 777754 00:05:07.358 16:18:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.358 16:18:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:07.358 16:18:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.358 16:18:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.358 16:18:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:07.358 16:18:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:07.358 INFO: Success 00:05:07.358 00:05:07.358 real 0m15.742s 00:05:07.358 user 0m16.767s 00:05:07.358 sys 0m2.098s 00:05:07.358 16:18:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.358 16:18:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.358 ************************************ 00:05:07.358 END TEST json_config 00:05:07.358 ************************************ 00:05:07.617 16:18:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:07.617 16:18:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.617 16:18:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.617 16:18:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.617 ************************************ 00:05:07.617 START TEST json_config_extra_key 00:05:07.617 ************************************ 00:05:07.617 16:18:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:07.617 16:18:37 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.617 16:18:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.617 16:18:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.617 16:18:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.617 16:18:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.618 --rc genhtml_branch_coverage=1 00:05:07.618 --rc genhtml_function_coverage=1 00:05:07.618 --rc genhtml_legend=1 00:05:07.618 --rc geninfo_all_blocks=1 
00:05:07.618 --rc geninfo_unexecuted_blocks=1 00:05:07.618 00:05:07.618 ' 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.618 --rc genhtml_branch_coverage=1 00:05:07.618 --rc genhtml_function_coverage=1 00:05:07.618 --rc genhtml_legend=1 00:05:07.618 --rc geninfo_all_blocks=1 00:05:07.618 --rc geninfo_unexecuted_blocks=1 00:05:07.618 00:05:07.618 ' 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.618 --rc genhtml_branch_coverage=1 00:05:07.618 --rc genhtml_function_coverage=1 00:05:07.618 --rc genhtml_legend=1 00:05:07.618 --rc geninfo_all_blocks=1 00:05:07.618 --rc geninfo_unexecuted_blocks=1 00:05:07.618 00:05:07.618 ' 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.618 --rc genhtml_branch_coverage=1 00:05:07.618 --rc genhtml_function_coverage=1 00:05:07.618 --rc genhtml_legend=1 00:05:07.618 --rc geninfo_all_blocks=1 00:05:07.618 --rc geninfo_unexecuted_blocks=1 00:05:07.618 00:05:07.618 ' 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.618 16:18:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.618 16:18:37 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.618 16:18:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.618 16:18:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.618 16:18:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:07.618 16:18:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:07.618 16:18:37 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.618 16:18:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:07.618 INFO: launching applications... 00:05:07.618 16:18:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=779006 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.618 Waiting for target to run... 
00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 779006 /var/tmp/spdk_tgt.sock 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 779006 ']' 00:05:07.618 16:18:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.618 16:18:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.878 [2024-12-14 16:18:37.736085] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:07.878 [2024-12-14 16:18:37.736132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779006 ] 00:05:08.136 [2024-12-14 16:18:38.189843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.136 [2024-12-14 16:18:38.211136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.704 16:18:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.704 16:18:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:08.704 00:05:08.704 16:18:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:08.704 INFO: shutting down applications... 00:05:08.704 16:18:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 779006 ]] 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 779006 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 779006 00:05:08.704 16:18:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.272 16:18:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.272 16:18:39 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.272 16:18:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 779006 00:05:09.272 16:18:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.272 16:18:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:09.272 16:18:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.272 16:18:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.272 SPDK target shutdown done 00:05:09.272 16:18:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:09.272 Success 00:05:09.272 00:05:09.272 real 0m1.579s 00:05:09.272 user 0m1.165s 00:05:09.272 sys 0m0.596s 00:05:09.272 16:18:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.272 16:18:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.272 ************************************ 00:05:09.272 END TEST json_config_extra_key 00:05:09.272 ************************************ 00:05:09.272 16:18:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.272 16:18:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.272 16:18:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.272 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:05:09.272 ************************************ 00:05:09.272 START TEST alias_rpc 00:05:09.272 ************************************ 00:05:09.272 16:18:39 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.272 * Looking for test storage... 
00:05:09.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:09.272 16:18:39 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.272 16:18:39 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.272 16:18:39 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.272 16:18:39 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:09.272 16:18:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.273 16:18:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.273 16:18:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.273 16:18:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.273 --rc genhtml_branch_coverage=1 00:05:09.273 --rc genhtml_function_coverage=1 00:05:09.273 --rc genhtml_legend=1 00:05:09.273 --rc geninfo_all_blocks=1 00:05:09.273 --rc geninfo_unexecuted_blocks=1 00:05:09.273 00:05:09.273 ' 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.273 --rc genhtml_branch_coverage=1 00:05:09.273 --rc genhtml_function_coverage=1 00:05:09.273 --rc genhtml_legend=1 00:05:09.273 --rc geninfo_all_blocks=1 00:05:09.273 --rc geninfo_unexecuted_blocks=1 00:05:09.273 00:05:09.273 ' 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:09.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.273 --rc genhtml_branch_coverage=1 00:05:09.273 --rc genhtml_function_coverage=1 00:05:09.273 --rc genhtml_legend=1 00:05:09.273 --rc geninfo_all_blocks=1 00:05:09.273 --rc geninfo_unexecuted_blocks=1 00:05:09.273 00:05:09.273 ' 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.273 --rc genhtml_branch_coverage=1 00:05:09.273 --rc genhtml_function_coverage=1 00:05:09.273 --rc genhtml_legend=1 00:05:09.273 --rc geninfo_all_blocks=1 00:05:09.273 --rc geninfo_unexecuted_blocks=1 00:05:09.273 00:05:09.273 ' 00:05:09.273 16:18:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:09.273 16:18:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=779421 00:05:09.273 16:18:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.273 16:18:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 779421 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 779421 ']' 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.273 16:18:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.532 [2024-12-14 16:18:39.376234] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:09.532 [2024-12-14 16:18:39.376285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779421 ] 00:05:09.532 [2024-12-14 16:18:39.449799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.532 [2024-12-14 16:18:39.471560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.792 16:18:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.792 16:18:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:09.792 16:18:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:10.050 16:18:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 779421 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 779421 ']' 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 779421 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779421 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779421' 00:05:10.050 killing process with pid 779421 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 779421 00:05:10.050 16:18:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 779421 00:05:10.309 00:05:10.309 real 0m1.091s 00:05:10.309 user 0m1.119s 00:05:10.309 sys 0m0.417s 00:05:10.309 16:18:40 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.309 16:18:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.309 ************************************ 00:05:10.309 END TEST alias_rpc 00:05:10.309 ************************************ 00:05:10.309 16:18:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:10.309 16:18:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.309 16:18:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.309 16:18:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.309 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:05:10.309 ************************************ 00:05:10.309 START TEST spdkcli_tcp 00:05:10.309 ************************************ 00:05:10.309 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.309 * Looking for test storage... 
00:05:10.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.568 16:18:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.568 --rc genhtml_branch_coverage=1 00:05:10.568 --rc genhtml_function_coverage=1 00:05:10.568 --rc genhtml_legend=1 00:05:10.568 --rc geninfo_all_blocks=1 00:05:10.568 --rc geninfo_unexecuted_blocks=1 00:05:10.568 00:05:10.568 ' 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.568 --rc genhtml_branch_coverage=1 00:05:10.568 --rc genhtml_function_coverage=1 00:05:10.568 --rc genhtml_legend=1 00:05:10.568 --rc geninfo_all_blocks=1 00:05:10.568 --rc geninfo_unexecuted_blocks=1 00:05:10.568 00:05:10.568 ' 00:05:10.568 16:18:40 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.568 --rc genhtml_branch_coverage=1 00:05:10.568 --rc genhtml_function_coverage=1 00:05:10.568 --rc genhtml_legend=1 00:05:10.568 --rc geninfo_all_blocks=1 00:05:10.568 --rc geninfo_unexecuted_blocks=1 00:05:10.568 00:05:10.568 ' 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.568 --rc genhtml_branch_coverage=1 00:05:10.568 --rc genhtml_function_coverage=1 00:05:10.568 --rc genhtml_legend=1 00:05:10.568 --rc geninfo_all_blocks=1 00:05:10.568 --rc geninfo_unexecuted_blocks=1 00:05:10.568 00:05:10.568 ' 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=779584 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 779584 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 779584 ']' 00:05:10.568 16:18:40 
spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.568 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.568 16:18:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.568 [2024-12-14 16:18:40.538030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:10.568 [2024-12-14 16:18:40.538082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779584 ] 00:05:10.568 [2024-12-14 16:18:40.612827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.568 [2024-12-14 16:18:40.636156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.568 [2024-12-14 16:18:40.636157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.827 16:18:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.827 16:18:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:10.827 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.827 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=779790 00:05:10.827 16:18:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 
rpc_get_methods 00:05:11.086 [ 00:05:11.086 "bdev_malloc_delete", 00:05:11.086 "bdev_malloc_create", 00:05:11.086 "bdev_null_resize", 00:05:11.086 "bdev_null_delete", 00:05:11.086 "bdev_null_create", 00:05:11.086 "bdev_nvme_cuse_unregister", 00:05:11.086 "bdev_nvme_cuse_register", 00:05:11.086 "bdev_opal_new_user", 00:05:11.086 "bdev_opal_set_lock_state", 00:05:11.086 "bdev_opal_delete", 00:05:11.086 "bdev_opal_get_info", 00:05:11.086 "bdev_opal_create", 00:05:11.086 "bdev_nvme_opal_revert", 00:05:11.086 "bdev_nvme_opal_init", 00:05:11.086 "bdev_nvme_send_cmd", 00:05:11.086 "bdev_nvme_set_keys", 00:05:11.086 "bdev_nvme_get_path_iostat", 00:05:11.086 "bdev_nvme_get_mdns_discovery_info", 00:05:11.086 "bdev_nvme_stop_mdns_discovery", 00:05:11.086 "bdev_nvme_start_mdns_discovery", 00:05:11.086 "bdev_nvme_set_multipath_policy", 00:05:11.086 "bdev_nvme_set_preferred_path", 00:05:11.086 "bdev_nvme_get_io_paths", 00:05:11.086 "bdev_nvme_remove_error_injection", 00:05:11.086 "bdev_nvme_add_error_injection", 00:05:11.086 "bdev_nvme_get_discovery_info", 00:05:11.086 "bdev_nvme_stop_discovery", 00:05:11.086 "bdev_nvme_start_discovery", 00:05:11.086 "bdev_nvme_get_controller_health_info", 00:05:11.086 "bdev_nvme_disable_controller", 00:05:11.086 "bdev_nvme_enable_controller", 00:05:11.086 "bdev_nvme_reset_controller", 00:05:11.086 "bdev_nvme_get_transport_statistics", 00:05:11.086 "bdev_nvme_apply_firmware", 00:05:11.086 "bdev_nvme_detach_controller", 00:05:11.086 "bdev_nvme_get_controllers", 00:05:11.086 "bdev_nvme_attach_controller", 00:05:11.086 "bdev_nvme_set_hotplug", 00:05:11.086 "bdev_nvme_set_options", 00:05:11.086 "bdev_passthru_delete", 00:05:11.086 "bdev_passthru_create", 00:05:11.086 "bdev_lvol_set_parent_bdev", 00:05:11.086 "bdev_lvol_set_parent", 00:05:11.086 "bdev_lvol_check_shallow_copy", 00:05:11.086 "bdev_lvol_start_shallow_copy", 00:05:11.086 "bdev_lvol_grow_lvstore", 00:05:11.086 "bdev_lvol_get_lvols", 00:05:11.086 "bdev_lvol_get_lvstores", 00:05:11.086 
"bdev_lvol_delete", 00:05:11.086 "bdev_lvol_set_read_only", 00:05:11.086 "bdev_lvol_resize", 00:05:11.086 "bdev_lvol_decouple_parent", 00:05:11.086 "bdev_lvol_inflate", 00:05:11.086 "bdev_lvol_rename", 00:05:11.086 "bdev_lvol_clone_bdev", 00:05:11.086 "bdev_lvol_clone", 00:05:11.086 "bdev_lvol_snapshot", 00:05:11.086 "bdev_lvol_create", 00:05:11.086 "bdev_lvol_delete_lvstore", 00:05:11.086 "bdev_lvol_rename_lvstore", 00:05:11.086 "bdev_lvol_create_lvstore", 00:05:11.086 "bdev_raid_set_options", 00:05:11.086 "bdev_raid_remove_base_bdev", 00:05:11.086 "bdev_raid_add_base_bdev", 00:05:11.086 "bdev_raid_delete", 00:05:11.086 "bdev_raid_create", 00:05:11.086 "bdev_raid_get_bdevs", 00:05:11.086 "bdev_error_inject_error", 00:05:11.086 "bdev_error_delete", 00:05:11.086 "bdev_error_create", 00:05:11.086 "bdev_split_delete", 00:05:11.086 "bdev_split_create", 00:05:11.086 "bdev_delay_delete", 00:05:11.086 "bdev_delay_create", 00:05:11.086 "bdev_delay_update_latency", 00:05:11.086 "bdev_zone_block_delete", 00:05:11.086 "bdev_zone_block_create", 00:05:11.086 "blobfs_create", 00:05:11.086 "blobfs_detect", 00:05:11.086 "blobfs_set_cache_size", 00:05:11.086 "bdev_aio_delete", 00:05:11.086 "bdev_aio_rescan", 00:05:11.086 "bdev_aio_create", 00:05:11.086 "bdev_ftl_set_property", 00:05:11.086 "bdev_ftl_get_properties", 00:05:11.086 "bdev_ftl_get_stats", 00:05:11.086 "bdev_ftl_unmap", 00:05:11.086 "bdev_ftl_unload", 00:05:11.086 "bdev_ftl_delete", 00:05:11.086 "bdev_ftl_load", 00:05:11.086 "bdev_ftl_create", 00:05:11.086 "bdev_virtio_attach_controller", 00:05:11.086 "bdev_virtio_scsi_get_devices", 00:05:11.086 "bdev_virtio_detach_controller", 00:05:11.086 "bdev_virtio_blk_set_hotplug", 00:05:11.086 "bdev_iscsi_delete", 00:05:11.086 "bdev_iscsi_create", 00:05:11.086 "bdev_iscsi_set_options", 00:05:11.086 "accel_error_inject_error", 00:05:11.086 "ioat_scan_accel_module", 00:05:11.086 "dsa_scan_accel_module", 00:05:11.086 "iaa_scan_accel_module", 00:05:11.086 
"vfu_virtio_create_fs_endpoint", 00:05:11.086 "vfu_virtio_create_scsi_endpoint", 00:05:11.086 "vfu_virtio_scsi_remove_target", 00:05:11.086 "vfu_virtio_scsi_add_target", 00:05:11.086 "vfu_virtio_create_blk_endpoint", 00:05:11.086 "vfu_virtio_delete_endpoint", 00:05:11.086 "keyring_file_remove_key", 00:05:11.086 "keyring_file_add_key", 00:05:11.086 "keyring_linux_set_options", 00:05:11.086 "fsdev_aio_delete", 00:05:11.087 "fsdev_aio_create", 00:05:11.087 "iscsi_get_histogram", 00:05:11.087 "iscsi_enable_histogram", 00:05:11.087 "iscsi_set_options", 00:05:11.087 "iscsi_get_auth_groups", 00:05:11.087 "iscsi_auth_group_remove_secret", 00:05:11.087 "iscsi_auth_group_add_secret", 00:05:11.087 "iscsi_delete_auth_group", 00:05:11.087 "iscsi_create_auth_group", 00:05:11.087 "iscsi_set_discovery_auth", 00:05:11.087 "iscsi_get_options", 00:05:11.087 "iscsi_target_node_request_logout", 00:05:11.087 "iscsi_target_node_set_redirect", 00:05:11.087 "iscsi_target_node_set_auth", 00:05:11.087 "iscsi_target_node_add_lun", 00:05:11.087 "iscsi_get_stats", 00:05:11.087 "iscsi_get_connections", 00:05:11.087 "iscsi_portal_group_set_auth", 00:05:11.087 "iscsi_start_portal_group", 00:05:11.087 "iscsi_delete_portal_group", 00:05:11.087 "iscsi_create_portal_group", 00:05:11.087 "iscsi_get_portal_groups", 00:05:11.087 "iscsi_delete_target_node", 00:05:11.087 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.087 "iscsi_target_node_add_pg_ig_maps", 00:05:11.087 "iscsi_create_target_node", 00:05:11.087 "iscsi_get_target_nodes", 00:05:11.087 "iscsi_delete_initiator_group", 00:05:11.087 "iscsi_initiator_group_remove_initiators", 00:05:11.087 "iscsi_initiator_group_add_initiators", 00:05:11.087 "iscsi_create_initiator_group", 00:05:11.087 "iscsi_get_initiator_groups", 00:05:11.087 "nvmf_set_crdt", 00:05:11.087 "nvmf_set_config", 00:05:11.087 "nvmf_set_max_subsystems", 00:05:11.087 "nvmf_stop_mdns_prr", 00:05:11.087 "nvmf_publish_mdns_prr", 00:05:11.087 "nvmf_subsystem_get_listeners", 00:05:11.087 
"nvmf_subsystem_get_qpairs", 00:05:11.087 "nvmf_subsystem_get_controllers", 00:05:11.087 "nvmf_get_stats", 00:05:11.087 "nvmf_get_transports", 00:05:11.087 "nvmf_create_transport", 00:05:11.087 "nvmf_get_targets", 00:05:11.087 "nvmf_delete_target", 00:05:11.087 "nvmf_create_target", 00:05:11.087 "nvmf_subsystem_allow_any_host", 00:05:11.087 "nvmf_subsystem_set_keys", 00:05:11.087 "nvmf_subsystem_remove_host", 00:05:11.087 "nvmf_subsystem_add_host", 00:05:11.087 "nvmf_ns_remove_host", 00:05:11.087 "nvmf_ns_add_host", 00:05:11.087 "nvmf_subsystem_remove_ns", 00:05:11.087 "nvmf_subsystem_set_ns_ana_group", 00:05:11.087 "nvmf_subsystem_add_ns", 00:05:11.087 "nvmf_subsystem_listener_set_ana_state", 00:05:11.087 "nvmf_discovery_get_referrals", 00:05:11.087 "nvmf_discovery_remove_referral", 00:05:11.087 "nvmf_discovery_add_referral", 00:05:11.087 "nvmf_subsystem_remove_listener", 00:05:11.087 "nvmf_subsystem_add_listener", 00:05:11.087 "nvmf_delete_subsystem", 00:05:11.087 "nvmf_create_subsystem", 00:05:11.087 "nvmf_get_subsystems", 00:05:11.087 "env_dpdk_get_mem_stats", 00:05:11.087 "nbd_get_disks", 00:05:11.087 "nbd_stop_disk", 00:05:11.087 "nbd_start_disk", 00:05:11.087 "ublk_recover_disk", 00:05:11.087 "ublk_get_disks", 00:05:11.087 "ublk_stop_disk", 00:05:11.087 "ublk_start_disk", 00:05:11.087 "ublk_destroy_target", 00:05:11.087 "ublk_create_target", 00:05:11.087 "virtio_blk_create_transport", 00:05:11.087 "virtio_blk_get_transports", 00:05:11.087 "vhost_controller_set_coalescing", 00:05:11.087 "vhost_get_controllers", 00:05:11.087 "vhost_delete_controller", 00:05:11.087 "vhost_create_blk_controller", 00:05:11.087 "vhost_scsi_controller_remove_target", 00:05:11.087 "vhost_scsi_controller_add_target", 00:05:11.087 "vhost_start_scsi_controller", 00:05:11.087 "vhost_create_scsi_controller", 00:05:11.087 "thread_set_cpumask", 00:05:11.087 "scheduler_set_options", 00:05:11.087 "framework_get_governor", 00:05:11.087 "framework_get_scheduler", 00:05:11.087 
"framework_set_scheduler", 00:05:11.087 "framework_get_reactors", 00:05:11.087 "thread_get_io_channels", 00:05:11.087 "thread_get_pollers", 00:05:11.087 "thread_get_stats", 00:05:11.087 "framework_monitor_context_switch", 00:05:11.087 "spdk_kill_instance", 00:05:11.087 "log_enable_timestamps", 00:05:11.087 "log_get_flags", 00:05:11.087 "log_clear_flag", 00:05:11.087 "log_set_flag", 00:05:11.087 "log_get_level", 00:05:11.087 "log_set_level", 00:05:11.087 "log_get_print_level", 00:05:11.087 "log_set_print_level", 00:05:11.087 "framework_enable_cpumask_locks", 00:05:11.087 "framework_disable_cpumask_locks", 00:05:11.087 "framework_wait_init", 00:05:11.087 "framework_start_init", 00:05:11.087 "scsi_get_devices", 00:05:11.087 "bdev_get_histogram", 00:05:11.087 "bdev_enable_histogram", 00:05:11.087 "bdev_set_qos_limit", 00:05:11.087 "bdev_set_qd_sampling_period", 00:05:11.087 "bdev_get_bdevs", 00:05:11.087 "bdev_reset_iostat", 00:05:11.087 "bdev_get_iostat", 00:05:11.087 "bdev_examine", 00:05:11.087 "bdev_wait_for_examine", 00:05:11.087 "bdev_set_options", 00:05:11.087 "accel_get_stats", 00:05:11.087 "accel_set_options", 00:05:11.087 "accel_set_driver", 00:05:11.087 "accel_crypto_key_destroy", 00:05:11.087 "accel_crypto_keys_get", 00:05:11.087 "accel_crypto_key_create", 00:05:11.087 "accel_assign_opc", 00:05:11.087 "accel_get_module_info", 00:05:11.087 "accel_get_opc_assignments", 00:05:11.087 "vmd_rescan", 00:05:11.087 "vmd_remove_device", 00:05:11.087 "vmd_enable", 00:05:11.087 "sock_get_default_impl", 00:05:11.087 "sock_set_default_impl", 00:05:11.087 "sock_impl_set_options", 00:05:11.087 "sock_impl_get_options", 00:05:11.087 "iobuf_get_stats", 00:05:11.087 "iobuf_set_options", 00:05:11.087 "keyring_get_keys", 00:05:11.087 "vfu_tgt_set_base_path", 00:05:11.087 "framework_get_pci_devices", 00:05:11.087 "framework_get_config", 00:05:11.087 "framework_get_subsystems", 00:05:11.087 "fsdev_set_opts", 00:05:11.087 "fsdev_get_opts", 00:05:11.087 "trace_get_info", 
00:05:11.087 "trace_get_tpoint_group_mask", 00:05:11.087 "trace_disable_tpoint_group", 00:05:11.087 "trace_enable_tpoint_group", 00:05:11.087 "trace_clear_tpoint_mask", 00:05:11.087 "trace_set_tpoint_mask", 00:05:11.087 "notify_get_notifications", 00:05:11.087 "notify_get_types", 00:05:11.087 "spdk_get_version", 00:05:11.087 "rpc_get_methods" 00:05:11.087 ] 00:05:11.087 16:18:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.087 16:18:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.087 16:18:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 779584 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 779584 ']' 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 779584 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779584 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779584' 00:05:11.087 killing process with pid 779584 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 779584 00:05:11.087 16:18:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 779584 00:05:11.347 00:05:11.347 real 0m1.117s 00:05:11.347 user 0m1.888s 00:05:11.347 sys 0m0.442s 00:05:11.347 16:18:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.347 16:18:41 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:05:11.347 ************************************ 00:05:11.347 END TEST spdkcli_tcp 00:05:11.347 ************************************ 00:05:11.606 16:18:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.606 16:18:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.606 16:18:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.606 16:18:41 -- common/autotest_common.sh@10 -- # set +x 00:05:11.606 ************************************ 00:05:11.606 START TEST dpdk_mem_utility 00:05:11.606 ************************************ 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.606 * Looking for test storage... 00:05:11.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.606 16:18:41 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.606 16:18:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.606 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.606 --rc genhtml_branch_coverage=1 00:05:11.606 --rc genhtml_function_coverage=1 00:05:11.606 --rc genhtml_legend=1 00:05:11.606 --rc geninfo_all_blocks=1 00:05:11.606 --rc geninfo_unexecuted_blocks=1 00:05:11.606 00:05:11.606 ' 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.606 --rc genhtml_branch_coverage=1 00:05:11.606 --rc genhtml_function_coverage=1 00:05:11.606 --rc genhtml_legend=1 00:05:11.606 --rc geninfo_all_blocks=1 00:05:11.606 --rc geninfo_unexecuted_blocks=1 00:05:11.606 00:05:11.606 ' 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.606 --rc genhtml_branch_coverage=1 00:05:11.606 --rc genhtml_function_coverage=1 00:05:11.606 --rc genhtml_legend=1 00:05:11.606 --rc geninfo_all_blocks=1 00:05:11.606 --rc geninfo_unexecuted_blocks=1 00:05:11.606 00:05:11.606 ' 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.606 --rc genhtml_branch_coverage=1 00:05:11.606 --rc genhtml_function_coverage=1 00:05:11.606 --rc genhtml_legend=1 00:05:11.606 --rc geninfo_all_blocks=1 00:05:11.606 --rc geninfo_unexecuted_blocks=1 00:05:11.606 00:05:11.606 ' 00:05:11.606 16:18:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.606 16:18:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=779873 00:05:11.606 16:18:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 779873 00:05:11.606 16:18:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 779873 ']' 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.606 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.607 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.607 16:18:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.865 [2024-12-14 16:18:41.711770] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:11.865 [2024-12-14 16:18:41.711818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779873 ] 00:05:11.865 [2024-12-14 16:18:41.784204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.865 [2024-12-14 16:18:41.806010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.125 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.125 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:12.125 16:18:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.125 16:18:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.125 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.125 16:18:42 
dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.125 { 00:05:12.125 "filename": "/tmp/spdk_mem_dump.txt" 00:05:12.125 } 00:05:12.125 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.125 16:18:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.125 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:12.125 1 heaps totaling size 818.000000 MiB 00:05:12.125 size: 818.000000 MiB heap id: 0 00:05:12.125 end heaps---------- 00:05:12.125 9 mempools totaling size 603.782043 MiB 00:05:12.125 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:12.125 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:12.125 size: 100.555481 MiB name: bdev_io_779873 00:05:12.125 size: 50.003479 MiB name: msgpool_779873 00:05:12.125 size: 36.509338 MiB name: fsdev_io_779873 00:05:12.125 size: 21.763794 MiB name: PDU_Pool 00:05:12.125 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:12.125 size: 4.133484 MiB name: evtpool_779873 00:05:12.125 size: 0.026123 MiB name: Session_Pool 00:05:12.125 end mempools------- 00:05:12.125 6 memzones totaling size 4.142822 MiB 00:05:12.125 size: 1.000366 MiB name: RG_ring_0_779873 00:05:12.125 size: 1.000366 MiB name: RG_ring_1_779873 00:05:12.125 size: 1.000366 MiB name: RG_ring_4_779873 00:05:12.125 size: 1.000366 MiB name: RG_ring_5_779873 00:05:12.125 size: 0.125366 MiB name: RG_ring_2_779873 00:05:12.125 size: 0.015991 MiB name: RG_ring_3_779873 00:05:12.125 end memzones------- 00:05:12.125 16:18:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:12.125 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:12.125 list of free elements. 
size: 10.852478 MiB 00:05:12.125 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:12.125 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:12.125 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:12.125 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:12.125 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:12.125 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:12.125 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:12.125 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:12.125 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:12.125 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:12.125 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:12.125 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:12.125 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:12.125 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:12.125 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:12.125 list of standard malloc elements. 
size: 199.218628 MiB 00:05:12.125 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:12.125 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:12.125 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:12.125 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:12.125 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:12.125 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:12.125 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:12.125 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:12.125 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:12.125 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:12.125 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:12.125 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:12.125 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:12.125 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:12.125 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:12.125 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:12.125 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:12.125 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:12.125 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:12.125 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:12.125 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:12.125 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:12.126 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:12.126 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:12.126 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:12.126 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:12.126 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:12.126 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:12.126 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:12.126 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:12.126 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:12.126 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:12.126 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:12.126 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:12.126 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:12.126 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:12.126 list of memzone associated elements. 
size: 607.928894 MiB 00:05:12.126 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:12.126 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:12.126 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:12.126 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:12.126 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:12.126 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_779873_0 00:05:12.126 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:12.126 associated memzone info: size: 48.002930 MiB name: MP_msgpool_779873_0 00:05:12.126 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:12.126 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_779873_0 00:05:12.126 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:12.126 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:12.126 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:12.126 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:12.126 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:12.126 associated memzone info: size: 3.000122 MiB name: MP_evtpool_779873_0 00:05:12.126 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:12.126 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_779873 00:05:12.126 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:12.126 associated memzone info: size: 1.007996 MiB name: MP_evtpool_779873 00:05:12.126 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:12.126 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:12.126 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:12.126 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:12.126 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:12.126 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:12.126 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:12.126 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:12.126 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:12.126 associated memzone info: size: 1.000366 MiB name: RG_ring_0_779873 00:05:12.126 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:12.126 associated memzone info: size: 1.000366 MiB name: RG_ring_1_779873 00:05:12.126 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:12.126 associated memzone info: size: 1.000366 MiB name: RG_ring_4_779873 00:05:12.126 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:12.126 associated memzone info: size: 1.000366 MiB name: RG_ring_5_779873 00:05:12.126 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:12.126 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_779873 00:05:12.126 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:12.126 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_779873 00:05:12.126 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:12.126 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:12.126 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:12.126 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:12.126 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:12.126 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:12.126 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:12.126 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_779873 00:05:12.126 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:12.126 associated memzone info: size: 0.125366 MiB name: RG_ring_2_779873 00:05:12.126 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:05:12.126 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:12.126 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:12.126 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:12.126 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:12.126 associated memzone info: size: 0.015991 MiB name: RG_ring_3_779873 00:05:12.126 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:12.126 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:12.126 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:12.126 associated memzone info: size: 0.000183 MiB name: MP_msgpool_779873 00:05:12.126 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:12.126 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_779873 00:05:12.126 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:12.126 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_779873 00:05:12.126 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:12.126 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:12.126 16:18:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:12.126 16:18:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 779873 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 779873 ']' 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 779873 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779873 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.126 16:18:42 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779873' 00:05:12.126 killing process with pid 779873 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 779873 00:05:12.126 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 779873 00:05:12.385 00:05:12.385 real 0m0.971s 00:05:12.385 user 0m0.920s 00:05:12.385 sys 0m0.404s 00:05:12.385 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.385 16:18:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.385 ************************************ 00:05:12.385 END TEST dpdk_mem_utility 00:05:12.385 ************************************ 00:05:12.644 16:18:42 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:12.644 16:18:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.644 16:18:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.644 16:18:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.644 ************************************ 00:05:12.644 START TEST event 00:05:12.644 ************************************ 00:05:12.644 16:18:42 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:12.644 * Looking for test storage... 
00:05:12.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:12.644 16:18:42 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.644 16:18:42 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.644 16:18:42 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.644 16:18:42 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.644 16:18:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.644 16:18:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.644 16:18:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.644 16:18:42 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.644 16:18:42 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.644 16:18:42 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.644 16:18:42 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.644 16:18:42 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.644 16:18:42 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.644 16:18:42 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.644 16:18:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.644 16:18:42 event -- scripts/common.sh@344 -- # case "$op" in 00:05:12.644 16:18:42 event -- scripts/common.sh@345 -- # : 1 00:05:12.644 16:18:42 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.644 16:18:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.644 16:18:42 event -- scripts/common.sh@365 -- # decimal 1 00:05:12.644 16:18:42 event -- scripts/common.sh@353 -- # local d=1 00:05:12.644 16:18:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.644 16:18:42 event -- scripts/common.sh@355 -- # echo 1 00:05:12.644 16:18:42 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.644 16:18:42 event -- scripts/common.sh@366 -- # decimal 2 00:05:12.644 16:18:42 event -- scripts/common.sh@353 -- # local d=2 00:05:12.644 16:18:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.644 16:18:42 event -- scripts/common.sh@355 -- # echo 2 00:05:12.644 16:18:42 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.644 16:18:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.644 16:18:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.644 16:18:42 event -- scripts/common.sh@368 -- # return 0 00:05:12.645 16:18:42 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.645 16:18:42 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.645 --rc genhtml_branch_coverage=1 00:05:12.645 --rc genhtml_function_coverage=1 00:05:12.645 --rc genhtml_legend=1 00:05:12.645 --rc geninfo_all_blocks=1 00:05:12.645 --rc geninfo_unexecuted_blocks=1 00:05:12.645 00:05:12.645 ' 00:05:12.645 16:18:42 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.645 --rc genhtml_branch_coverage=1 00:05:12.645 --rc genhtml_function_coverage=1 00:05:12.645 --rc genhtml_legend=1 00:05:12.645 --rc geninfo_all_blocks=1 00:05:12.645 --rc geninfo_unexecuted_blocks=1 00:05:12.645 00:05:12.645 ' 00:05:12.645 16:18:42 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.645 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:12.645 --rc genhtml_branch_coverage=1 00:05:12.645 --rc genhtml_function_coverage=1 00:05:12.645 --rc genhtml_legend=1 00:05:12.645 --rc geninfo_all_blocks=1 00:05:12.645 --rc geninfo_unexecuted_blocks=1 00:05:12.645 00:05:12.645 ' 00:05:12.645 16:18:42 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.645 --rc genhtml_branch_coverage=1 00:05:12.645 --rc genhtml_function_coverage=1 00:05:12.645 --rc genhtml_legend=1 00:05:12.645 --rc geninfo_all_blocks=1 00:05:12.645 --rc geninfo_unexecuted_blocks=1 00:05:12.645 00:05:12.645 ' 00:05:12.645 16:18:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:12.645 16:18:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:12.645 16:18:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.645 16:18:42 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:12.645 16:18:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.645 16:18:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.904 ************************************ 00:05:12.904 START TEST event_perf 00:05:12.904 ************************************ 00:05:12.904 16:18:42 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.904 Running I/O for 1 seconds...[2024-12-14 16:18:42.765318] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:12.904 [2024-12-14 16:18:42.765394] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780157 ] 00:05:12.904 [2024-12-14 16:18:42.845878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.904 [2024-12-14 16:18:42.871406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.904 [2024-12-14 16:18:42.871515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.904 [2024-12-14 16:18:42.871620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.904 [2024-12-14 16:18:42.871621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.839 Running I/O for 1 seconds... 00:05:13.839 lcore 0: 209592 00:05:13.839 lcore 1: 209591 00:05:13.839 lcore 2: 209592 00:05:13.839 lcore 3: 209591 00:05:13.839 done. 
00:05:13.839 00:05:13.839 real 0m1.162s 00:05:13.839 user 0m4.079s 00:05:13.839 sys 0m0.079s 00:05:13.839 16:18:43 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.839 16:18:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.839 ************************************ 00:05:13.839 END TEST event_perf 00:05:13.839 ************************************ 00:05:14.098 16:18:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:14.098 16:18:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:14.098 16:18:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.098 16:18:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.098 ************************************ 00:05:14.098 START TEST event_reactor 00:05:14.098 ************************************ 00:05:14.098 16:18:43 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:14.098 [2024-12-14 16:18:43.992992] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:14.098 [2024-12-14 16:18:43.993058] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780403 ] 00:05:14.098 [2024-12-14 16:18:44.069950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.098 [2024-12-14 16:18:44.091483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.474 test_start 00:05:15.474 oneshot 00:05:15.474 tick 100 00:05:15.474 tick 100 00:05:15.474 tick 250 00:05:15.474 tick 100 00:05:15.474 tick 100 00:05:15.474 tick 100 00:05:15.474 tick 250 00:05:15.474 tick 500 00:05:15.474 tick 100 00:05:15.474 tick 100 00:05:15.474 tick 250 00:05:15.474 tick 100 00:05:15.474 tick 100 00:05:15.474 test_end 00:05:15.474 00:05:15.474 real 0m1.149s 00:05:15.474 user 0m1.073s 00:05:15.474 sys 0m0.072s 00:05:15.474 16:18:45 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.474 16:18:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:15.474 ************************************ 00:05:15.474 END TEST event_reactor 00:05:15.474 ************************************ 00:05:15.474 16:18:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.474 16:18:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:15.474 16:18:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.474 16:18:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.474 ************************************ 00:05:15.474 START TEST event_reactor_perf 00:05:15.474 ************************************ 00:05:15.474 16:18:45 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:15.475 [2024-12-14 16:18:45.215317] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:15.475 [2024-12-14 16:18:45.215389] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780649 ] 00:05:15.475 [2024-12-14 16:18:45.294511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.475 [2024-12-14 16:18:45.316431] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.411 test_start 00:05:16.411 test_end 00:05:16.411 Performance: 512774 events per second 00:05:16.411 00:05:16.411 real 0m1.154s 00:05:16.411 user 0m1.067s 00:05:16.411 sys 0m0.082s 00:05:16.411 16:18:46 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.411 16:18:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.411 ************************************ 00:05:16.411 END TEST event_reactor_perf 00:05:16.411 ************************************ 00:05:16.411 16:18:46 event -- event/event.sh@49 -- # uname -s 00:05:16.411 16:18:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:16.411 16:18:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:16.411 16:18:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.411 16:18:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.411 16:18:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.411 ************************************ 00:05:16.411 START TEST event_scheduler 00:05:16.411 ************************************ 00:05:16.411 16:18:46 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:16.670 * Looking for test storage... 00:05:16.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:16.670 16:18:46 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:16.670 16:18:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:16.670 16:18:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:16.670 16:18:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.670 16:18:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:16.670 16:18:46 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.670 16:18:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:16.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.670 --rc genhtml_branch_coverage=1 00:05:16.670 --rc genhtml_function_coverage=1 00:05:16.670 --rc genhtml_legend=1 00:05:16.670 --rc geninfo_all_blocks=1 00:05:16.670 --rc geninfo_unexecuted_blocks=1 00:05:16.670 00:05:16.670 ' 00:05:16.670 16:18:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:16.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.670 --rc genhtml_branch_coverage=1 00:05:16.670 --rc genhtml_function_coverage=1 00:05:16.670 --rc 
genhtml_legend=1 00:05:16.670 --rc geninfo_all_blocks=1 00:05:16.670 --rc geninfo_unexecuted_blocks=1 00:05:16.670 00:05:16.670 ' 00:05:16.670 16:18:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:16.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.670 --rc genhtml_branch_coverage=1 00:05:16.670 --rc genhtml_function_coverage=1 00:05:16.670 --rc genhtml_legend=1 00:05:16.670 --rc geninfo_all_blocks=1 00:05:16.670 --rc geninfo_unexecuted_blocks=1 00:05:16.670 00:05:16.671 ' 00:05:16.671 16:18:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:16.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.671 --rc genhtml_branch_coverage=1 00:05:16.671 --rc genhtml_function_coverage=1 00:05:16.671 --rc genhtml_legend=1 00:05:16.671 --rc geninfo_all_blocks=1 00:05:16.671 --rc geninfo_unexecuted_blocks=1 00:05:16.671 00:05:16.671 ' 00:05:16.671 16:18:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:16.671 16:18:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=780931 00:05:16.671 16:18:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:16.671 16:18:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.671 16:18:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 780931 00:05:16.671 16:18:46 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 780931 ']' 00:05:16.671 16:18:46 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.671 16:18:46 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.671 16:18:46 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.671 16:18:46 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.671 16:18:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.671 [2024-12-14 16:18:46.646114] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:16.671 [2024-12-14 16:18:46.646162] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780931 ] 00:05:16.671 [2024-12-14 16:18:46.720798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.671 [2024-12-14 16:18:46.746315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.671 [2024-12-14 16:18:46.746426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.671 [2024-12-14 16:18:46.746513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.671 [2024-12-14 16:18:46.746514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:16.931 16:18:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 [2024-12-14 16:18:46.803162] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:16.931 [2024-12-14 16:18:46.803180] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:16.931 [2024-12-14 16:18:46.803189] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:16.931 [2024-12-14 16:18:46.803194] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:16.931 [2024-12-14 16:18:46.803199] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 [2024-12-14 16:18:46.873251] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 ************************************ 00:05:16.931 START TEST scheduler_create_thread 00:05:16.931 ************************************ 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 2 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 3 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 4 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 5 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 6 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 7 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 8 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 9 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 10 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:16.931 16:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.931 16:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.931 16:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.931 16:18:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:16.931 16:18:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:16.932 16:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.932 16:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.499 16:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.499 16:18:47 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:17.499 16:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.499 16:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.874 16:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.874 16:18:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:18.874 16:18:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:18.874 16:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.874 16:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.250 16:18:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.250 00:05:20.250 real 0m3.103s 00:05:20.250 user 0m0.028s 00:05:20.250 sys 0m0.003s 00:05:20.250 16:18:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.250 16:18:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.250 ************************************ 00:05:20.250 END TEST scheduler_create_thread 00:05:20.250 ************************************ 00:05:20.250 16:18:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:20.250 16:18:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 780931 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 780931 ']' 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 780931 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780931 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780931' 00:05:20.250 killing process with pid 780931 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 780931 00:05:20.250 16:18:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 780931 00:05:20.508 [2024-12-14 16:18:50.392308] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:20.508 00:05:20.508 real 0m4.151s 00:05:20.508 user 0m6.705s 00:05:20.508 sys 0m0.348s 00:05:20.508 16:18:50 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.508 16:18:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.508 ************************************ 00:05:20.508 END TEST event_scheduler 00:05:20.508 ************************************ 00:05:20.767 16:18:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:20.767 16:18:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:20.767 16:18:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.767 16:18:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.767 16:18:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.767 ************************************ 00:05:20.767 START TEST app_repeat 00:05:20.767 ************************************ 00:05:20.767 16:18:50 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=781652 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 781652' 00:05:20.767 Process app_repeat pid: 781652 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:20.767 spdk_app_start Round 0 00:05:20.767 16:18:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781652 /var/tmp/spdk-nbd.sock 00:05:20.767 16:18:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781652 ']' 00:05:20.767 16:18:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.767 16:18:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.767 16:18:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.767 16:18:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.767 16:18:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.767 [2024-12-14 16:18:50.683031] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:20.767 [2024-12-14 16:18:50.683083] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781652 ] 00:05:20.767 [2024-12-14 16:18:50.755552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.767 [2024-12-14 16:18:50.778341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.767 [2024-12-14 16:18:50.778343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.026 16:18:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.026 16:18:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:21.026 16:18:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.026 Malloc0 00:05:21.026 16:18:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.285 Malloc1 00:05:21.285 16:18:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.285 
16:18:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.285 16:18:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.544 /dev/nbd0 00:05:21.544 16:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.544 16:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:21.544 1+0 records in 00:05:21.544 1+0 records out 00:05:21.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188499 s, 21.7 MB/s 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.544 16:18:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.544 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.544 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.545 16:18:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.804 /dev/nbd1 00:05:21.804 16:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.804 16:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.804 16:18:51 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.804 1+0 records in 00:05:21.804 1+0 records out 00:05:21.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226695 s, 18.1 MB/s 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.804 16:18:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.804 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.804 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.804 16:18:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.804 16:18:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.804 16:18:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.064 { 00:05:22.064 "nbd_device": "/dev/nbd0", 00:05:22.064 "bdev_name": "Malloc0" 00:05:22.064 }, 00:05:22.064 { 00:05:22.064 "nbd_device": "/dev/nbd1", 00:05:22.064 "bdev_name": "Malloc1" 00:05:22.064 } 00:05:22.064 ]' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 
00:05:22.064 { 00:05:22.064 "nbd_device": "/dev/nbd0", 00:05:22.064 "bdev_name": "Malloc0" 00:05:22.064 }, 00:05:22.064 { 00:05:22.064 "nbd_device": "/dev/nbd1", 00:05:22.064 "bdev_name": "Malloc1" 00:05:22.064 } 00:05:22.064 ]' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.064 /dev/nbd1' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.064 /dev/nbd1' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.064 256+0 records in 00:05:22.064 256+0 records out 00:05:22.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00989881 s, 106 MB/s 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.064 256+0 records in 00:05:22.064 256+0 records out 00:05:22.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143948 s, 72.8 MB/s 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.064 256+0 records in 00:05:22.064 256+0 records out 00:05:22.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147673 s, 71.0 MB/s 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.064 16:18:52 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.064 16:18:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.323 16:18:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.581 16:18:52 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.581 16:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.840 16:18:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.840 16:18:52 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.099 16:18:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.099 [2024-12-14 16:18:53.154507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.099 [2024-12-14 16:18:53.174314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.099 [2024-12-14 16:18:53.174315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.358 [2024-12-14 16:18:53.214691] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.358 [2024-12-14 16:18:53.214729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.945 16:18:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.946 16:18:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:25.946 spdk_app_start Round 1 00:05:25.946 16:18:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781652 /var/tmp/spdk-nbd.sock 00:05:25.946 16:18:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781652 ']' 00:05:25.946 16:18:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.946 16:18:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.946 16:18:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:25.946 16:18:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.946 16:18:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.204 16:18:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.204 16:18:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.204 16:18:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.461 Malloc0 00:05:26.461 16:18:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.719 Malloc1 00:05:26.720 16:18:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.720 16:18:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.978 /dev/nbd0 00:05:26.979 16:18:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.979 16:18:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.979 1+0 records in 00:05:26.979 1+0 records out 00:05:26.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191277 s, 21.4 MB/s 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.979 16:18:56 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.979 16:18:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.979 16:18:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.979 16:18:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.979 16:18:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.237 /dev/nbd1 00:05:27.237 16:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.237 16:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.237 1+0 records in 00:05:27.237 1+0 records out 00:05:27.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241996 s, 16.9 MB/s 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.237 16:18:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.237 16:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.237 16:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.237 16:18:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.238 16:18:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.238 16:18:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.497 { 00:05:27.497 "nbd_device": "/dev/nbd0", 00:05:27.497 "bdev_name": "Malloc0" 00:05:27.497 }, 00:05:27.497 { 00:05:27.497 "nbd_device": "/dev/nbd1", 00:05:27.497 "bdev_name": "Malloc1" 00:05:27.497 } 00:05:27.497 ]' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.497 { 00:05:27.497 "nbd_device": "/dev/nbd0", 00:05:27.497 "bdev_name": "Malloc0" 00:05:27.497 }, 00:05:27.497 { 00:05:27.497 "nbd_device": "/dev/nbd1", 00:05:27.497 "bdev_name": "Malloc1" 00:05:27.497 } 00:05:27.497 ]' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.497 /dev/nbd1' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.497 /dev/nbd1' 00:05:27.497 
16:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.497 256+0 records in 00:05:27.497 256+0 records out 00:05:27.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102949 s, 102 MB/s 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.497 256+0 records in 00:05:27.497 256+0 records out 00:05:27.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137533 s, 76.2 MB/s 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.497 256+0 records in 00:05:27.497 256+0 records out 00:05:27.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148445 s, 70.6 MB/s 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.497 16:18:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.757 16:18:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.016 16:18:57 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.016 16:18:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.016 16:18:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.016 16:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.016 16:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.275 16:18:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.275 16:18:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.275 16:18:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.534 [2024-12-14 16:18:58.486323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.534 [2024-12-14 16:18:58.506260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.534 [2024-12-14 16:18:58.506262] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.534 [2024-12-14 16:18:58.547419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.534 [2024-12-14 16:18:58.547458] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.821 16:19:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.821 16:19:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:31.821 spdk_app_start Round 2 00:05:31.821 16:19:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781652 /var/tmp/spdk-nbd.sock 00:05:31.821 16:19:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781652 ']' 00:05:31.821 16:19:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.821 16:19:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.821 16:19:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:31.821 16:19:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.821 16:19:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.821 16:19:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.821 16:19:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:31.821 16:19:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.821 Malloc0 00:05:31.821 16:19:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.080 Malloc1 00:05:32.080 16:19:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.080 16:19:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.080 /dev/nbd0 00:05:32.339 16:19:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.339 16:19:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.339 1+0 records in 00:05:32.339 1+0 records out 00:05:32.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001214 s, 3.4 MB/s 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.339 16:19:02 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.339 16:19:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.339 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.339 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.339 16:19:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.339 /dev/nbd1 00:05:32.339 16:19:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.598 16:19:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.598 1+0 records in 00:05:32.598 1+0 records out 00:05:32.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189136 s, 21.7 MB/s 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.598 16:19:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.598 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.598 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.598 16:19:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.598 16:19:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.598 16:19:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.598 16:19:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.598 { 00:05:32.598 "nbd_device": "/dev/nbd0", 00:05:32.598 "bdev_name": "Malloc0" 00:05:32.599 }, 00:05:32.599 { 00:05:32.599 "nbd_device": "/dev/nbd1", 00:05:32.599 "bdev_name": "Malloc1" 00:05:32.599 } 00:05:32.599 ]' 00:05:32.599 16:19:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.599 { 00:05:32.599 "nbd_device": "/dev/nbd0", 00:05:32.599 "bdev_name": "Malloc0" 00:05:32.599 }, 00:05:32.599 { 00:05:32.599 "nbd_device": "/dev/nbd1", 00:05:32.599 "bdev_name": "Malloc1" 00:05:32.599 } 00:05:32.599 ]' 00:05:32.599 16:19:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.858 /dev/nbd1' 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.858 /dev/nbd1' 00:05:32.858 
16:19:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.858 256+0 records in 00:05:32.858 256+0 records out 00:05:32.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102761 s, 102 MB/s 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.858 256+0 records in 00:05:32.858 256+0 records out 00:05:32.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139848 s, 75.0 MB/s 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.858 256+0 records in 00:05:32.858 256+0 records out 00:05:32.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145046 s, 72.3 MB/s 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.858 16:19:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.116 16:19:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.116 16:19:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.116 16:19:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.116 16:19:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.116 16:19:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.117 16:19:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.117 16:19:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.117 16:19:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.117 16:19:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.117 16:19:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.117 16:19:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.117 16:19:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.117 16:19:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.117 16:19:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.117 16:19:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.117 16:19:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.117 16:19:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:33.117 16:19:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.374 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.375 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.375 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.375 16:19:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.375 16:19:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.375 16:19:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.375 16:19:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.375 16:19:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.633 16:19:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.891 [2024-12-14 16:19:03.800454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.891 [2024-12-14 16:19:03.820395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.891 [2024-12-14 16:19:03.820396] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.891 [2024-12-14 16:19:03.861440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.891 [2024-12-14 16:19:03.861480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.176 16:19:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 781652 /var/tmp/spdk-nbd.sock 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781652 ']' 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.176 16:19:06 event.app_repeat -- event/event.sh@39 -- # killprocess 781652 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 781652 ']' 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 781652 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 781652 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 781652' 00:05:37.176 killing process with pid 781652 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@973 -- # kill 781652 00:05:37.176 16:19:06 event.app_repeat -- common/autotest_common.sh@978 -- # wait 781652 00:05:37.176 spdk_app_start is called in Round 0. 00:05:37.176 Shutdown signal received, stop current app iteration 00:05:37.176 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:37.176 spdk_app_start is called in Round 1. 00:05:37.176 Shutdown signal received, stop current app iteration 00:05:37.176 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:37.176 spdk_app_start is called in Round 2. 
00:05:37.176 Shutdown signal received, stop current app iteration 00:05:37.176 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:37.176 spdk_app_start is called in Round 3. 00:05:37.176 Shutdown signal received, stop current app iteration 00:05:37.176 16:19:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:37.176 16:19:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:37.176 00:05:37.176 real 0m16.405s 00:05:37.176 user 0m36.161s 00:05:37.176 sys 0m2.497s 00:05:37.176 16:19:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.176 16:19:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.176 ************************************ 00:05:37.176 END TEST app_repeat 00:05:37.176 ************************************ 00:05:37.176 16:19:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:37.176 16:19:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:37.176 16:19:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.176 16:19:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.176 16:19:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.176 ************************************ 00:05:37.176 START TEST cpu_locks 00:05:37.176 ************************************ 00:05:37.176 16:19:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:37.176 * Looking for test storage... 
00:05:37.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:37.176 16:19:07 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.176 16:19:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.176 16:19:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.434 16:19:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.434 16:19:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.435 16:19:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:37.435 16:19:07 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.435 16:19:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.435 --rc genhtml_branch_coverage=1 00:05:37.435 --rc genhtml_function_coverage=1 00:05:37.435 --rc genhtml_legend=1 00:05:37.435 --rc geninfo_all_blocks=1 00:05:37.435 --rc geninfo_unexecuted_blocks=1 00:05:37.435 00:05:37.435 ' 00:05:37.435 16:19:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.435 --rc genhtml_branch_coverage=1 00:05:37.435 --rc genhtml_function_coverage=1 00:05:37.435 --rc genhtml_legend=1 00:05:37.435 --rc geninfo_all_blocks=1 00:05:37.435 --rc geninfo_unexecuted_blocks=1 
00:05:37.435 00:05:37.435 ' 00:05:37.435 16:19:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.435 --rc genhtml_branch_coverage=1 00:05:37.435 --rc genhtml_function_coverage=1 00:05:37.435 --rc genhtml_legend=1 00:05:37.435 --rc geninfo_all_blocks=1 00:05:37.435 --rc geninfo_unexecuted_blocks=1 00:05:37.435 00:05:37.435 ' 00:05:37.435 16:19:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.435 --rc genhtml_branch_coverage=1 00:05:37.435 --rc genhtml_function_coverage=1 00:05:37.435 --rc genhtml_legend=1 00:05:37.435 --rc geninfo_all_blocks=1 00:05:37.435 --rc geninfo_unexecuted_blocks=1 00:05:37.435 00:05:37.435 ' 00:05:37.435 16:19:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:37.435 16:19:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:37.435 16:19:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:37.435 16:19:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:37.435 16:19:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.435 16:19:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.435 16:19:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.435 ************************************ 00:05:37.435 START TEST default_locks 00:05:37.435 ************************************ 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=784577 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 784577 00:05:37.435 16:19:07 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 784577 ']' 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.435 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.435 [2024-12-14 16:19:07.383616] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:37.435 [2024-12-14 16:19:07.383654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784577 ] 00:05:37.435 [2024-12-14 16:19:07.457980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.435 [2024-12-14 16:19:07.479703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.694 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.694 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:37.694 16:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 784577 00:05:37.694 16:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 784577 00:05:37.694 16:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.261 lslocks: write error 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 784577 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 784577 ']' 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 784577 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784577 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 784577' 00:05:38.261 killing process with pid 784577 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 784577 00:05:38.261 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 784577 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 784577 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 784577 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 784577 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 784577 ']' 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (784577) - No such process 00:05:38.520 ERROR: process (pid: 784577) is no longer running 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.520 00:05:38.520 real 0m1.230s 00:05:38.520 user 0m1.185s 00:05:38.520 sys 0m0.573s 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.520 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.520 ************************************ 00:05:38.520 END TEST default_locks 00:05:38.520 ************************************ 00:05:38.520 16:19:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:38.520 16:19:08 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.520 16:19:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.520 16:19:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.807 ************************************ 00:05:38.807 START TEST default_locks_via_rpc 00:05:38.807 ************************************ 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=784846 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 784846 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 784846 ']' 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.807 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.807 [2024-12-14 16:19:08.667760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:38.807 [2024-12-14 16:19:08.667799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784846 ] 00:05:38.807 [2024-12-14 16:19:08.724616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.807 [2024-12-14 16:19:08.747902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.099 16:19:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 784846 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 784846 00:05:39.099 16:19:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 784846 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 784846 ']' 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 784846 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784846 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784846' 00:05:39.437 killing process with pid 784846 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 784846 00:05:39.437 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 784846 00:05:40.006 00:05:40.006 real 0m1.158s 00:05:40.006 user 0m1.153s 00:05:40.006 sys 0m0.520s 00:05:40.006 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.006 16:19:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.006 ************************************ 00:05:40.006 END TEST default_locks_via_rpc 00:05:40.006 ************************************ 00:05:40.006 16:19:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.006 16:19:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.006 16:19:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.006 16:19:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.006 ************************************ 00:05:40.006 START TEST non_locking_app_on_locked_coremask 00:05:40.006 ************************************ 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=785108 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 785108 /var/tmp/spdk.sock 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785108 ']' 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:40.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.006 16:19:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.006 [2024-12-14 16:19:09.898629] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:40.006 [2024-12-14 16:19:09.898669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785108 ] 00:05:40.006 [2024-12-14 16:19:09.972601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.006 [2024-12-14 16:19:09.995551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=785114 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 785114 /var/tmp/spdk2.sock 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785114 ']' 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.265 16:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.265 [2024-12-14 16:19:10.249524] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:40.265 [2024-12-14 16:19:10.249574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785114 ] 00:05:40.265 [2024-12-14 16:19:10.338862] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
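The `NOT waitforlisten 784577` sequence earlier in this log (with `es=1` recorded afterwards) is an expected-failure wrapper: the command must fail for the test to pass. A simplified sketch of that wrapper (an assumption: the real `NOT` in `autotest_common.sh` also classifies the exit status via `es`, which is omitted here):

```shell
#!/bin/sh
# NOT inverts a command's result: success means the wrapped command failed.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  fi
  return 0     # command failed, which is what the test wanted
}

# A command that fails satisfies NOT...
NOT false && echo "expected failure observed"

# ...and a command that succeeds is flagged.
NOT true || echo "unexpected success caught"
```

This is why `default_locks` can end with `return 1` from `waitforlisten` yet still print `END TEST default_locks`: the inverted status is the passing condition.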
00:05:40.265 [2024-12-14 16:19:10.338892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.524 [2024-12-14 16:19:10.387247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.092 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.092 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.092 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 785108 00:05:41.092 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785108 00:05:41.092 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.660 lslocks: write error 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 785108 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785108 ']' 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 785108 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785108 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 785108' 00:05:41.660 killing process with pid 785108 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 785108 00:05:41.660 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 785108 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 785114 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785114 ']' 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 785114 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785114 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785114' 00:05:42.229 killing process with pid 785114 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 785114 00:05:42.229 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 785114 00:05:42.488 00:05:42.488 real 0m2.694s 00:05:42.488 user 0m2.833s 00:05:42.488 sys 0m0.917s 00:05:42.488 16:19:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.488 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.488 ************************************ 00:05:42.488 END TEST non_locking_app_on_locked_coremask 00:05:42.488 ************************************ 00:05:42.747 16:19:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:42.747 16:19:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.747 16:19:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.747 16:19:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.748 ************************************ 00:05:42.748 START TEST locking_app_on_unlocked_coremask 00:05:42.748 ************************************ 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=785590 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 785590 /var/tmp/spdk.sock 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785590 ']' 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.748 16:19:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.748 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.748 [2024-12-14 16:19:12.669862] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:42.748 [2024-12-14 16:19:12.669903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785590 ] 00:05:42.748 [2024-12-14 16:19:12.743915] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
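The `locks_exist` checks in this log (`lslocks -p <pid> | grep -q spdk_cpu_lock`) probe advisory file locks that the target takes per CPU core; `--disable-cpumask-locks` skips taking them, hence the `CPU core locks deactivated` notice above. A rough illustration of the underlying conflict using `flock` (assumptions: the lock path is a demo stand-in, and SPDK's actual per-core locking is only approximated here):

```shell
#!/bin/sh
# Stand-in for a per-core lock file such as /var/tmp/spdk_cpu_lock_*.
lock=/tmp/demo_cpu_lock_000

# Hold an exclusive advisory lock on fd 9, as a running target would.
exec 9>"$lock"
if flock -n 9; then
  echo "core 0 lock acquired"
fi

# A second, independent claim on the same file fails while we hold it;
# this conflict is what the lslocks/grep probe detects.
if ! flock -n "$lock" -c true 2>/dev/null; then
  echo "second claim rejected"
fi

# Release and clean up.
exec 9>&-
rm -f "$lock"
```

A second `spdk_tgt` on the same core mask would hit exactly this rejection, which is why the tests start the second instance with `--disable-cpumask-locks` when both must coexist.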
00:05:42.748 [2024-12-14 16:19:12.743940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.748 [2024-12-14 16:19:12.764149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=785603 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 785603 /var/tmp/spdk2.sock 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785603 ']' 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.007 16:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.007 [2024-12-14 16:19:13.031019] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:43.007 [2024-12-14 16:19:13.031064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785603 ] 00:05:43.272 [2024-12-14 16:19:13.120927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.272 [2024-12-14 16:19:13.162942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.840 16:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.840 16:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.840 16:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 785603 00:05:43.840 16:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785603 00:05:43.840 16:19:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.408 lslocks: write error 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 785590 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785590 ']' 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 785590 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785590 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785590' 00:05:44.408 killing process with pid 785590 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 785590 00:05:44.408 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 785590 00:05:44.977 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 785603 00:05:44.977 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785603 ']' 00:05:44.977 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 785603 00:05:44.977 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.977 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.977 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785603 00:05:44.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785603' 00:05:44.977 killing process with pid 785603 00:05:44.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 785603 00:05:44.977 16:19:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 785603 00:05:45.238 00:05:45.238 real 0m2.701s 00:05:45.238 user 0m2.827s 00:05:45.238 sys 0m0.938s 00:05:45.238 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.238 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.238 ************************************ 00:05:45.238 END TEST locking_app_on_unlocked_coremask 00:05:45.238 ************************************ 00:05:45.498 16:19:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:45.498 16:19:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.498 16:19:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.498 16:19:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.498 ************************************ 00:05:45.498 START TEST locking_app_on_locked_coremask 00:05:45.498 ************************************ 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=786081 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 786081 /var/tmp/spdk.sock 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 786081 ']' 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.498 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.498 [2024-12-14 16:19:15.441605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:45.498 [2024-12-14 16:19:15.441647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786081 ] 00:05:45.498 [2024-12-14 16:19:15.512613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.498 [2024-12-14 16:19:15.533997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=786086 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 786086 /var/tmp/spdk2.sock 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 786086 /var/tmp/spdk2.sock 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 786086 /var/tmp/spdk2.sock 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 786086 ']' 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.757 16:19:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.757 [2024-12-14 16:19:15.794792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:45.757 [2024-12-14 16:19:15.794838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786086 ] 00:05:46.016 [2024-12-14 16:19:15.880725] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 786081 has claimed it. 00:05:46.016 [2024-12-14 16:19:15.880765] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:46.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (786086) - No such process 00:05:46.584 ERROR: process (pid: 786086) is no longer running 00:05:46.584 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.584 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:46.584 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:46.584 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.584 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.584 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.584 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 786081 00:05:46.584 16:19:16 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 786081 00:05:46.584 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.151 lslocks: write error 00:05:47.151 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 786081 00:05:47.151 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 786081 ']' 00:05:47.151 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 786081 00:05:47.151 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.151 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.151 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786081 00:05:47.151 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.152 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.152 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786081' 00:05:47.152 killing process with pid 786081 00:05:47.152 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 786081 00:05:47.152 16:19:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 786081 00:05:47.411 00:05:47.411 real 0m1.893s 00:05:47.411 user 0m2.051s 00:05:47.411 sys 0m0.647s 00:05:47.411 16:19:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.411 16:19:17 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.411 ************************************ 00:05:47.411 END TEST locking_app_on_locked_coremask 00:05:47.411 ************************************ 00:05:47.411 16:19:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:47.411 16:19:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.411 16:19:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.411 16:19:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.411 ************************************ 00:05:47.411 START TEST locking_overlapped_coremask 00:05:47.411 ************************************ 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=786475 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 786475 /var/tmp/spdk.sock 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 786475 ']' 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.411 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.411 [2024-12-14 16:19:17.407583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:47.411 [2024-12-14 16:19:17.407626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786475 ] 00:05:47.411 [2024-12-14 16:19:17.480847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.670 [2024-12-14 16:19:17.505799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.670 [2024-12-14 16:19:17.505905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.670 [2024-12-14 16:19:17.505906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=786561 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 786561 /var/tmp/spdk2.sock 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 786561 /var/tmp/spdk2.sock 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 786561 /var/tmp/spdk2.sock 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 786561 ']' 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.670 16:19:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.929 [2024-12-14 16:19:17.761144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:47.929 [2024-12-14 16:19:17.761192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786561 ] 00:05:47.929 [2024-12-14 16:19:17.853505] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 786475 has claimed it. 00:05:47.929 [2024-12-14 16:19:17.853546] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (786561) - No such process 00:05:48.497 ERROR: process (pid: 786561) is no longer running 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 786475 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 786475 ']' 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 786475 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786475 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786475' 00:05:48.497 killing process with pid 786475 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 786475 00:05:48.497 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 786475 00:05:48.756 00:05:48.756 real 0m1.389s 00:05:48.756 user 0m3.881s 00:05:48.756 sys 0m0.382s 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.756 ************************************ 
00:05:48.756 END TEST locking_overlapped_coremask 00:05:48.756 ************************************ 00:05:48.756 16:19:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:48.756 16:19:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.756 16:19:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.756 16:19:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.756 ************************************ 00:05:48.756 START TEST locking_overlapped_coremask_via_rpc 00:05:48.756 ************************************ 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=786808 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 786808 /var/tmp/spdk.sock 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786808 ']' 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:48.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.756 16:19:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.015 [2024-12-14 16:19:18.867395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:49.015 [2024-12-14 16:19:18.867439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786808 ] 00:05:49.015 [2024-12-14 16:19:18.943084] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.015 [2024-12-14 16:19:18.943113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.015 [2024-12-14 16:19:18.968257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.015 [2024-12-14 16:19:18.968362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.015 [2024-12-14 16:19:18.968363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=786823 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 786823 /var/tmp/spdk2.sock 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786823 ']' 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.274 16:19:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.274 [2024-12-14 16:19:19.218497] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:49.274 [2024-12-14 16:19:19.218541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786823 ] 00:05:49.274 [2024-12-14 16:19:19.305435] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.274 [2024-12-14 16:19:19.305461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.274 [2024-12-14 16:19:19.357887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.274 [2024-12-14 16:19:19.357917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.274 [2024-12-14 16:19:19.357918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.211 16:19:20 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.211 [2024-12-14 16:19:20.083628] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 786808 has claimed it. 00:05:50.211 request: 00:05:50.211 { 00:05:50.211 "method": "framework_enable_cpumask_locks", 00:05:50.211 "req_id": 1 00:05:50.211 } 00:05:50.211 Got JSON-RPC error response 00:05:50.211 response: 00:05:50.211 { 00:05:50.211 "code": -32603, 00:05:50.211 "message": "Failed to claim CPU core: 2" 00:05:50.211 } 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 786808 /var/tmp/spdk.sock 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 786808 ']' 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.211 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 786823 /var/tmp/spdk2.sock 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786823 ']' 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.470 00:05:50.470 real 0m1.702s 00:05:50.470 user 0m0.859s 00:05:50.470 sys 0m0.122s 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.470 16:19:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.470 ************************************ 00:05:50.470 END TEST locking_overlapped_coremask_via_rpc 00:05:50.470 ************************************ 00:05:50.470 16:19:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:50.470 16:19:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 786808 ]] 00:05:50.470 16:19:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 786808 00:05:50.470 16:19:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786808 ']' 00:05:50.470 16:19:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786808 00:05:50.470 16:19:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:50.470 16:19:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.729 16:19:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786808 00:05:50.729 16:19:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.729 16:19:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.729 16:19:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786808' 00:05:50.729 killing process with pid 786808 00:05:50.729 16:19:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 786808 00:05:50.729 16:19:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 786808 00:05:50.988 16:19:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 786823 ]] 00:05:50.988 16:19:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 786823 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786823 ']' 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786823 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786823 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786823' 00:05:50.988 
killing process with pid 786823 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 786823 00:05:50.988 16:19:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 786823 00:05:51.248 16:19:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.248 16:19:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:51.248 16:19:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 786808 ]] 00:05:51.248 16:19:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 786808 00:05:51.248 16:19:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786808 ']' 00:05:51.248 16:19:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786808 00:05:51.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (786808) - No such process 00:05:51.248 16:19:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 786808 is not found' 00:05:51.248 Process with pid 786808 is not found 00:05:51.248 16:19:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 786823 ]] 00:05:51.248 16:19:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 786823 00:05:51.248 16:19:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786823 ']' 00:05:51.248 16:19:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786823 00:05:51.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (786823) - No such process 00:05:51.248 16:19:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 786823 is not found' 00:05:51.248 Process with pid 786823 is not found 00:05:51.248 16:19:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.248 00:05:51.248 real 0m14.131s 00:05:51.248 user 0m24.559s 00:05:51.248 sys 0m5.083s 00:05:51.248 16:19:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.248 16:19:21 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.248 ************************************ 00:05:51.248 END TEST cpu_locks 00:05:51.248 ************************************ 00:05:51.248 00:05:51.248 real 0m38.758s 00:05:51.248 user 1m13.911s 00:05:51.248 sys 0m8.540s 00:05:51.248 16:19:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.248 16:19:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.248 ************************************ 00:05:51.248 END TEST event 00:05:51.248 ************************************ 00:05:51.248 16:19:21 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:51.248 16:19:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.248 16:19:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.248 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.507 ************************************ 00:05:51.507 START TEST thread 00:05:51.507 ************************************ 00:05:51.507 16:19:21 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:51.507 * Looking for test storage... 
00:05:51.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:51.507 16:19:21 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.507 16:19:21 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.507 16:19:21 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.507 16:19:21 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.507 16:19:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.507 16:19:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.507 16:19:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.507 16:19:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.507 16:19:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.507 16:19:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.507 16:19:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.507 16:19:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.507 16:19:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.507 16:19:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.507 16:19:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.507 16:19:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:51.507 16:19:21 thread -- scripts/common.sh@345 -- # : 1 00:05:51.507 16:19:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.507 16:19:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.507 16:19:21 thread -- scripts/common.sh@365 -- # decimal 1 00:05:51.507 16:19:21 thread -- scripts/common.sh@353 -- # local d=1 00:05:51.507 16:19:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.507 16:19:21 thread -- scripts/common.sh@355 -- # echo 1 00:05:51.507 16:19:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.507 16:19:21 thread -- scripts/common.sh@366 -- # decimal 2 00:05:51.507 16:19:21 thread -- scripts/common.sh@353 -- # local d=2 00:05:51.507 16:19:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.507 16:19:21 thread -- scripts/common.sh@355 -- # echo 2 00:05:51.507 16:19:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.507 16:19:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.507 16:19:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.507 16:19:21 thread -- scripts/common.sh@368 -- # return 0 00:05:51.508 16:19:21 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.508 16:19:21 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.508 --rc genhtml_branch_coverage=1 00:05:51.508 --rc genhtml_function_coverage=1 00:05:51.508 --rc genhtml_legend=1 00:05:51.508 --rc geninfo_all_blocks=1 00:05:51.508 --rc geninfo_unexecuted_blocks=1 00:05:51.508 00:05:51.508 ' 00:05:51.508 16:19:21 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.508 --rc genhtml_branch_coverage=1 00:05:51.508 --rc genhtml_function_coverage=1 00:05:51.508 --rc genhtml_legend=1 00:05:51.508 --rc geninfo_all_blocks=1 00:05:51.508 --rc geninfo_unexecuted_blocks=1 00:05:51.508 00:05:51.508 ' 00:05:51.508 16:19:21 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.508 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.508 --rc genhtml_branch_coverage=1 00:05:51.508 --rc genhtml_function_coverage=1 00:05:51.508 --rc genhtml_legend=1 00:05:51.508 --rc geninfo_all_blocks=1 00:05:51.508 --rc geninfo_unexecuted_blocks=1 00:05:51.508 00:05:51.508 ' 00:05:51.508 16:19:21 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.508 --rc genhtml_branch_coverage=1 00:05:51.508 --rc genhtml_function_coverage=1 00:05:51.508 --rc genhtml_legend=1 00:05:51.508 --rc geninfo_all_blocks=1 00:05:51.508 --rc geninfo_unexecuted_blocks=1 00:05:51.508 00:05:51.508 ' 00:05:51.508 16:19:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.508 16:19:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:51.508 16:19:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.508 16:19:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.508 ************************************ 00:05:51.508 START TEST thread_poller_perf 00:05:51.508 ************************************ 00:05:51.508 16:19:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.766 [2024-12-14 16:19:21.604733] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:51.766 [2024-12-14 16:19:21.604802] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787370 ] 00:05:51.766 [2024-12-14 16:19:21.685807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.766 [2024-12-14 16:19:21.707501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.766 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:52.702 [2024-12-14T15:19:22.788Z] ====================================== 00:05:52.702 [2024-12-14T15:19:22.788Z] busy:2107634214 (cyc) 00:05:52.702 [2024-12-14T15:19:22.788Z] total_run_count: 419000 00:05:52.702 [2024-12-14T15:19:22.788Z] tsc_hz: 2100000000 (cyc) 00:05:52.702 [2024-12-14T15:19:22.788Z] ====================================== 00:05:52.702 [2024-12-14T15:19:22.788Z] poller_cost: 5030 (cyc), 2395 (nsec) 00:05:52.702 00:05:52.702 real 0m1.167s 00:05:52.702 user 0m1.084s 00:05:52.702 sys 0m0.079s 00:05:52.702 16:19:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.702 16:19:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.702 ************************************ 00:05:52.702 END TEST thread_poller_perf 00:05:52.702 ************************************ 00:05:52.702 16:19:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.702 16:19:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:52.702 16:19:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.702 16:19:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.961 ************************************ 00:05:52.961 START TEST thread_poller_perf 00:05:52.961 
************************************ 00:05:52.961 16:19:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:52.961 [2024-12-14 16:19:22.842154] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:52.961 [2024-12-14 16:19:22.842225] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787556 ] 00:05:52.961 [2024-12-14 16:19:22.919137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.961 [2024-12-14 16:19:22.941126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.961 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:53.898 [2024-12-14T15:19:23.984Z] ====================================== 00:05:53.898 [2024-12-14T15:19:23.984Z] busy:2101528478 (cyc) 00:05:53.898 [2024-12-14T15:19:23.984Z] total_run_count: 5180000 00:05:53.898 [2024-12-14T15:19:23.984Z] tsc_hz: 2100000000 (cyc) 00:05:53.898 [2024-12-14T15:19:23.984Z] ====================================== 00:05:53.898 [2024-12-14T15:19:23.984Z] poller_cost: 405 (cyc), 192 (nsec) 00:05:53.898 00:05:53.898 real 0m1.152s 00:05:53.898 user 0m1.070s 00:05:53.898 sys 0m0.079s 00:05:53.898 16:19:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.898 16:19:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.898 ************************************ 00:05:53.898 END TEST thread_poller_perf 00:05:53.898 ************************************ 00:05:54.157 16:19:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:54.157 00:05:54.157 real 0m2.643s 00:05:54.157 user 0m2.308s 00:05:54.157 sys 0m0.352s 00:05:54.157 16:19:24 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.157 16:19:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.157 ************************************ 00:05:54.157 END TEST thread 00:05:54.158 ************************************ 00:05:54.158 16:19:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:54.158 16:19:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:54.158 16:19:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.158 16:19:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.158 16:19:24 -- common/autotest_common.sh@10 -- # set +x 00:05:54.158 ************************************ 00:05:54.158 START TEST app_cmdline 00:05:54.158 ************************************ 00:05:54.158 16:19:24 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:54.158 * Looking for test storage... 00:05:54.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:54.158 16:19:24 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.158 16:19:24 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.158 16:19:24 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.417 16:19:24 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:54.417 16:19:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:54.418 16:19:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.418 16:19:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:54.418 16:19:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.418 16:19:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.418 16:19:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.418 16:19:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.418 --rc genhtml_branch_coverage=1 
00:05:54.418 --rc genhtml_function_coverage=1 00:05:54.418 --rc genhtml_legend=1 00:05:54.418 --rc geninfo_all_blocks=1 00:05:54.418 --rc geninfo_unexecuted_blocks=1 00:05:54.418 00:05:54.418 ' 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.418 --rc genhtml_branch_coverage=1 00:05:54.418 --rc genhtml_function_coverage=1 00:05:54.418 --rc genhtml_legend=1 00:05:54.418 --rc geninfo_all_blocks=1 00:05:54.418 --rc geninfo_unexecuted_blocks=1 00:05:54.418 00:05:54.418 ' 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.418 --rc genhtml_branch_coverage=1 00:05:54.418 --rc genhtml_function_coverage=1 00:05:54.418 --rc genhtml_legend=1 00:05:54.418 --rc geninfo_all_blocks=1 00:05:54.418 --rc geninfo_unexecuted_blocks=1 00:05:54.418 00:05:54.418 ' 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.418 --rc genhtml_branch_coverage=1 00:05:54.418 --rc genhtml_function_coverage=1 00:05:54.418 --rc genhtml_legend=1 00:05:54.418 --rc geninfo_all_blocks=1 00:05:54.418 --rc geninfo_unexecuted_blocks=1 00:05:54.418 00:05:54.418 ' 00:05:54.418 16:19:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:54.418 16:19:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=787888 00:05:54.418 16:19:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 787888 00:05:54.418 16:19:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 787888 ']' 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.418 16:19:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.418 [2024-12-14 16:19:24.309220] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:54.418 [2024-12-14 16:19:24.309268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787888 ] 00:05:54.418 [2024-12-14 16:19:24.385456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.418 [2024-12-14 16:19:24.408151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.677 16:19:24 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.677 16:19:24 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:54.677 16:19:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:54.936 { 00:05:54.936 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:05:54.936 "fields": { 00:05:54.936 "major": 25, 00:05:54.936 "minor": 1, 00:05:54.936 "patch": 0, 00:05:54.936 "suffix": "-pre", 00:05:54.936 "commit": "e01cb43b8" 00:05:54.936 } 00:05:54.936 } 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:54.936 16:19:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:54.936 16:19:24 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.936 request: 00:05:54.936 { 00:05:54.936 "method": "env_dpdk_get_mem_stats", 00:05:54.936 "req_id": 1 00:05:54.936 } 00:05:54.936 Got JSON-RPC error response 00:05:54.936 response: 00:05:54.936 { 00:05:54.936 "code": -32601, 00:05:54.936 "message": "Method not found" 00:05:54.936 } 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.197 16:19:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 787888 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 787888 ']' 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 787888 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787888 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.197 16:19:25 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787888' 00:05:55.197 killing process with pid 787888 00:05:55.197 16:19:25 
app_cmdline -- common/autotest_common.sh@973 -- # kill 787888 00:05:55.198 16:19:25 app_cmdline -- common/autotest_common.sh@978 -- # wait 787888 00:05:55.457 00:05:55.457 real 0m1.303s 00:05:55.457 user 0m1.517s 00:05:55.457 sys 0m0.456s 00:05:55.457 16:19:25 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.457 16:19:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.457 ************************************ 00:05:55.457 END TEST app_cmdline 00:05:55.457 ************************************ 00:05:55.457 16:19:25 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:55.457 16:19:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.457 16:19:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.457 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.457 ************************************ 00:05:55.457 START TEST version 00:05:55.457 ************************************ 00:05:55.457 16:19:25 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:55.457 * Looking for test storage... 
00:05:55.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:55.716 16:19:25 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.716 16:19:25 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.716 16:19:25 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:55.716 16:19:25 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:55.716 16:19:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.716 16:19:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.716 16:19:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.716 16:19:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.716 16:19:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.716 16:19:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.716 16:19:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.716 16:19:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.716 16:19:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.716 16:19:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.716 16:19:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.716 16:19:25 version -- scripts/common.sh@344 -- # case "$op" in 00:05:55.716 16:19:25 version -- scripts/common.sh@345 -- # : 1 00:05:55.716 16:19:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.716 16:19:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.716 16:19:25 version -- scripts/common.sh@365 -- # decimal 1 00:05:55.716 16:19:25 version -- scripts/common.sh@353 -- # local d=1 00:05:55.716 16:19:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.716 16:19:25 version -- scripts/common.sh@355 -- # echo 1 00:05:55.716 16:19:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.716 16:19:25 version -- scripts/common.sh@366 -- # decimal 2 00:05:55.716 16:19:25 version -- scripts/common.sh@353 -- # local d=2 00:05:55.716 16:19:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.716 16:19:25 version -- scripts/common.sh@355 -- # echo 2 00:05:55.716 16:19:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.716 16:19:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.716 16:19:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.716 16:19:25 version -- scripts/common.sh@368 -- # return 0 00:05:55.716 16:19:25 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.716 16:19:25 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:55.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.716 --rc genhtml_branch_coverage=1 00:05:55.716 --rc genhtml_function_coverage=1 00:05:55.716 --rc genhtml_legend=1 00:05:55.716 --rc geninfo_all_blocks=1 00:05:55.716 --rc geninfo_unexecuted_blocks=1 00:05:55.716 00:05:55.716 ' 00:05:55.716 16:19:25 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:55.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.717 --rc genhtml_branch_coverage=1 00:05:55.717 --rc genhtml_function_coverage=1 00:05:55.717 --rc genhtml_legend=1 00:05:55.717 --rc geninfo_all_blocks=1 00:05:55.717 --rc geninfo_unexecuted_blocks=1 00:05:55.717 00:05:55.717 ' 00:05:55.717 16:19:25 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:55.717 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.717 --rc genhtml_branch_coverage=1 00:05:55.717 --rc genhtml_function_coverage=1 00:05:55.717 --rc genhtml_legend=1 00:05:55.717 --rc geninfo_all_blocks=1 00:05:55.717 --rc geninfo_unexecuted_blocks=1 00:05:55.717 00:05:55.717 ' 00:05:55.717 16:19:25 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:55.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.717 --rc genhtml_branch_coverage=1 00:05:55.717 --rc genhtml_function_coverage=1 00:05:55.717 --rc genhtml_legend=1 00:05:55.717 --rc geninfo_all_blocks=1 00:05:55.717 --rc geninfo_unexecuted_blocks=1 00:05:55.717 00:05:55.717 ' 00:05:55.717 16:19:25 version -- app/version.sh@17 -- # get_header_version major 00:05:55.717 16:19:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:55.717 16:19:25 version -- app/version.sh@14 -- # cut -f2 00:05:55.717 16:19:25 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.717 16:19:25 version -- app/version.sh@17 -- # major=25 00:05:55.717 16:19:25 version -- app/version.sh@18 -- # get_header_version minor 00:05:55.717 16:19:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:55.717 16:19:25 version -- app/version.sh@14 -- # cut -f2 00:05:55.717 16:19:25 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.717 16:19:25 version -- app/version.sh@18 -- # minor=1 00:05:55.717 16:19:25 version -- app/version.sh@19 -- # get_header_version patch 00:05:55.717 16:19:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:55.717 16:19:25 version -- app/version.sh@14 -- # cut -f2 00:05:55.717 16:19:25 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.717 
16:19:25 version -- app/version.sh@19 -- # patch=0 00:05:55.717 16:19:25 version -- app/version.sh@20 -- # get_header_version suffix 00:05:55.717 16:19:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:55.717 16:19:25 version -- app/version.sh@14 -- # cut -f2 00:05:55.717 16:19:25 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.717 16:19:25 version -- app/version.sh@20 -- # suffix=-pre 00:05:55.717 16:19:25 version -- app/version.sh@22 -- # version=25.1 00:05:55.717 16:19:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:55.717 16:19:25 version -- app/version.sh@28 -- # version=25.1rc0 00:05:55.717 16:19:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:55.717 16:19:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:55.717 16:19:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:55.717 16:19:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:55.717 00:05:55.717 real 0m0.253s 00:05:55.717 user 0m0.160s 00:05:55.717 sys 0m0.135s 00:05:55.717 16:19:25 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.717 16:19:25 version -- common/autotest_common.sh@10 -- # set +x 00:05:55.717 ************************************ 00:05:55.717 END TEST version 00:05:55.717 ************************************ 00:05:55.717 16:19:25 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:55.717 16:19:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:55.717 16:19:25 -- spdk/autotest.sh@194 -- # uname -s 00:05:55.717 16:19:25 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:55.717 16:19:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:55.717 16:19:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:55.717 16:19:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:55.717 16:19:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:55.717 16:19:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:55.717 16:19:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.717 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.717 16:19:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:55.717 16:19:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:55.717 16:19:25 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:55.717 16:19:25 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:55.717 16:19:25 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:55.717 16:19:25 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:55.717 16:19:25 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:55.717 16:19:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.717 16:19:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.717 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.976 ************************************ 00:05:55.976 START TEST nvmf_tcp 00:05:55.976 ************************************ 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:55.976 * Looking for test storage... 
00:05:55.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.976 16:19:25 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:55.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.976 --rc genhtml_branch_coverage=1 00:05:55.976 --rc genhtml_function_coverage=1 00:05:55.976 --rc genhtml_legend=1 00:05:55.976 --rc geninfo_all_blocks=1 00:05:55.976 --rc geninfo_unexecuted_blocks=1 00:05:55.976 00:05:55.976 ' 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:55.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.976 --rc genhtml_branch_coverage=1 00:05:55.976 --rc genhtml_function_coverage=1 00:05:55.976 --rc genhtml_legend=1 00:05:55.976 --rc geninfo_all_blocks=1 00:05:55.976 --rc geninfo_unexecuted_blocks=1 00:05:55.976 00:05:55.976 ' 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:55.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.976 --rc genhtml_branch_coverage=1 00:05:55.976 --rc genhtml_function_coverage=1 00:05:55.976 --rc genhtml_legend=1 00:05:55.976 --rc geninfo_all_blocks=1 00:05:55.976 --rc geninfo_unexecuted_blocks=1 00:05:55.976 00:05:55.976 ' 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:55.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.976 --rc genhtml_branch_coverage=1 00:05:55.976 --rc genhtml_function_coverage=1 00:05:55.976 --rc genhtml_legend=1 00:05:55.976 --rc geninfo_all_blocks=1 00:05:55.976 --rc geninfo_unexecuted_blocks=1 00:05:55.976 00:05:55.976 ' 00:05:55.976 16:19:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:55.976 16:19:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:55.976 16:19:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.976 16:19:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.976 ************************************ 00:05:55.976 START TEST nvmf_target_core 00:05:55.976 ************************************ 00:05:55.976 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:56.236 * Looking for test storage... 
00:05:56.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.236 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.237 --rc genhtml_branch_coverage=1 00:05:56.237 --rc genhtml_function_coverage=1 00:05:56.237 --rc genhtml_legend=1 00:05:56.237 --rc geninfo_all_blocks=1 00:05:56.237 --rc geninfo_unexecuted_blocks=1 00:05:56.237 00:05:56.237 ' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.237 --rc genhtml_branch_coverage=1 
00:05:56.237 --rc genhtml_function_coverage=1 00:05:56.237 --rc genhtml_legend=1 00:05:56.237 --rc geninfo_all_blocks=1 00:05:56.237 --rc geninfo_unexecuted_blocks=1 00:05:56.237 00:05:56.237 ' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.237 --rc genhtml_branch_coverage=1 00:05:56.237 --rc genhtml_function_coverage=1 00:05:56.237 --rc genhtml_legend=1 00:05:56.237 --rc geninfo_all_blocks=1 00:05:56.237 --rc geninfo_unexecuted_blocks=1 00:05:56.237 00:05:56.237 ' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.237 --rc genhtml_branch_coverage=1 00:05:56.237 --rc genhtml_function_coverage=1 00:05:56.237 --rc genhtml_legend=1 00:05:56.237 --rc geninfo_all_blocks=1 00:05:56.237 --rc geninfo_unexecuted_blocks=1 00:05:56.237 00:05:56.237 ' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:56.237 ************************************ 00:05:56.237 START TEST nvmf_abort 00:05:56.237 ************************************ 00:05:56.237 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:56.497 * Looking for test storage... 
00:05:56.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.497 
16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.497 --rc genhtml_branch_coverage=1 00:05:56.497 --rc genhtml_function_coverage=1 00:05:56.497 --rc genhtml_legend=1 00:05:56.497 --rc geninfo_all_blocks=1 00:05:56.497 --rc 
geninfo_unexecuted_blocks=1 00:05:56.497 00:05:56.497 ' 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.497 --rc genhtml_branch_coverage=1 00:05:56.497 --rc genhtml_function_coverage=1 00:05:56.497 --rc genhtml_legend=1 00:05:56.497 --rc geninfo_all_blocks=1 00:05:56.497 --rc geninfo_unexecuted_blocks=1 00:05:56.497 00:05:56.497 ' 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.497 --rc genhtml_branch_coverage=1 00:05:56.497 --rc genhtml_function_coverage=1 00:05:56.497 --rc genhtml_legend=1 00:05:56.497 --rc geninfo_all_blocks=1 00:05:56.497 --rc geninfo_unexecuted_blocks=1 00:05:56.497 00:05:56.497 ' 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.497 --rc genhtml_branch_coverage=1 00:05:56.497 --rc genhtml_function_coverage=1 00:05:56.497 --rc genhtml_legend=1 00:05:56.497 --rc geninfo_all_blocks=1 00:05:56.497 --rc geninfo_unexecuted_blocks=1 00:05:56.497 00:05:56.497 ' 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.497 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.498 16:19:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:56.498 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.069 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:03.070 16:19:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:03.070 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:03.070 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:03.070 16:19:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:03.070 Found net devices under 0000:af:00.0: cvl_0_0 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:06:03.070 Found net devices under 0000:af:00.1: cvl_0_1 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:03.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:03.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:06:03.070 00:06:03.070 --- 10.0.0.2 ping statistics --- 00:06:03.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.070 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:06:03.070 00:06:03.070 --- 10.0.0.1 ping statistics --- 00:06:03.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.070 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=791434 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 791434 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 791434 ']' 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.070 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.070 [2024-12-14 16:19:32.540852] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:03.071 [2024-12-14 16:19:32.540897] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.071 [2024-12-14 16:19:32.617909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.071 [2024-12-14 16:19:32.640768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.071 [2024-12-14 16:19:32.640809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.071 [2024-12-14 16:19:32.640817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.071 [2024-12-14 16:19:32.640834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.071 [2024-12-14 16:19:32.640839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:03.071 [2024-12-14 16:19:32.642049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.071 [2024-12-14 16:19:32.642135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.071 [2024-12-14 16:19:32.642134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.071 [2024-12-14 16:19:32.781282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.071 Malloc0 00:06:03.071 16:19:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.071 Delay0 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.071 [2024-12-14 16:19:32.864222] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.071 16:19:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:03.071 [2024-12-14 16:19:32.946182] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:04.976 Initializing NVMe Controllers 00:06:04.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:04.976 controller IO queue size 128 less than required 00:06:04.976 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:04.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:04.976 Initialization complete. Launching workers. 
00:06:04.976 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37381 00:06:04.976 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37442, failed to submit 62 00:06:04.976 success 37385, unsuccessful 57, failed 0 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:04.976 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:04.976 rmmod nvme_tcp 00:06:04.976 rmmod nvme_fabrics 00:06:05.234 rmmod nvme_keyring 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:05.234 16:19:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 791434 ']' 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 791434 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 791434 ']' 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 791434 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 791434 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 791434' 00:06:05.234 killing process with pid 791434 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 791434 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 791434 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:05.234 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:05.235 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:05.235 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:05.235 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:06:05.235 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:05.493 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:05.493 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:05.493 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.493 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.493 16:19:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.398 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:07.398 00:06:07.398 real 0m11.119s 00:06:07.398 user 0m11.450s 00:06:07.398 sys 0m5.343s 00:06:07.398 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.398 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.399 ************************************ 00:06:07.399 END TEST nvmf_abort 00:06:07.399 ************************************ 00:06:07.399 16:19:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:07.399 16:19:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.399 16:19:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.399 16:19:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:07.399 ************************************ 00:06:07.399 START TEST nvmf_ns_hotplug_stress 00:06:07.399 ************************************ 00:06:07.399 16:19:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:07.658 * Looking for test storage... 00:06:07.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.658 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.658 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.658 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.659 
16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.659 16:19:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.659 --rc genhtml_branch_coverage=1 00:06:07.659 --rc genhtml_function_coverage=1 00:06:07.659 --rc genhtml_legend=1 00:06:07.659 --rc geninfo_all_blocks=1 00:06:07.659 --rc geninfo_unexecuted_blocks=1 00:06:07.659 00:06:07.659 ' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.659 --rc genhtml_branch_coverage=1 00:06:07.659 --rc genhtml_function_coverage=1 00:06:07.659 --rc genhtml_legend=1 00:06:07.659 --rc geninfo_all_blocks=1 00:06:07.659 --rc geninfo_unexecuted_blocks=1 00:06:07.659 00:06:07.659 ' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.659 --rc genhtml_branch_coverage=1 00:06:07.659 --rc genhtml_function_coverage=1 00:06:07.659 --rc genhtml_legend=1 00:06:07.659 --rc geninfo_all_blocks=1 00:06:07.659 --rc geninfo_unexecuted_blocks=1 00:06:07.659 00:06:07.659 ' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.659 --rc genhtml_branch_coverage=1 00:06:07.659 --rc genhtml_function_coverage=1 00:06:07.659 --rc genhtml_legend=1 00:06:07.659 --rc geninfo_all_blocks=1 00:06:07.659 --rc geninfo_unexecuted_blocks=1 00:06:07.659 
00:06:07.659 ' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:07.659 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:07.660 16:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:14.233 16:19:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:14.233 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:14.233 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:14.233 16:19:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:14.233 Found net devices under 0000:af:00.0: cvl_0_0 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:14.233 16:19:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:14.233 Found net devices under 0000:af:00.1: cvl_0_1 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.233 16:19:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:14.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:06:14.233 00:06:14.233 --- 10.0.0.2 ping statistics --- 00:06:14.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.233 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:06:14.233 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:14.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:06:14.234 00:06:14.234 --- 10.0.0.1 ping statistics --- 00:06:14.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.234 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=795490 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 795490 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 795490 ']' 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.234 16:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.234 [2024-12-14 16:19:43.807888] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:14.234 [2024-12-14 16:19:43.807931] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.234 [2024-12-14 16:19:43.888800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.234 [2024-12-14 16:19:43.910617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.234 [2024-12-14 16:19:43.910655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.234 [2024-12-14 16:19:43.910662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.234 [2024-12-14 16:19:43.910667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.234 [2024-12-14 16:19:43.910672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:14.234 [2024-12-14 16:19:43.911979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.234 [2024-12-14 16:19:43.912085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.234 [2024-12-14 16:19:43.912087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:14.234 [2024-12-14 16:19:44.204307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.234 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:14.493 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:14.751 [2024-12-14 16:19:44.601737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:14.751 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:14.751 16:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:15.010 Malloc0 00:06:15.010 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.268 Delay0 00:06:15.268 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.526 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:15.785 NULL1 00:06:15.785 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:15.785 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=795750 00:06:15.785 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:15.785 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:15.785 16:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.043 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.301 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:16.301 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:16.560 true 00:06:16.560 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:16.560 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.819 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.819 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:16.819 16:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:17.077 true 00:06:17.077 16:19:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:17.077 16:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.336 16:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.594 16:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:17.594 16:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:17.853 true 00:06:17.853 16:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:17.853 16:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.111 16:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.111 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:18.111 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:18.370 true 00:06:18.370 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:18.370 16:19:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.628 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.887 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:18.887 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:18.887 true 00:06:19.145 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:19.145 16:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.145 16:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.403 16:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:19.403 16:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:19.662 true 00:06:19.662 16:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:19.662 16:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.920 16:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.179 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:20.179 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:20.179 true 00:06:20.179 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:20.179 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.437 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.696 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:20.696 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:20.954 true 00:06:20.954 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:20.954 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.213 
16:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.472 16:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:21.472 16:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:21.472 true 00:06:21.472 16:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:21.472 16:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.730 16:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.989 16:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:21.989 16:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:22.248 true 00:06:22.248 16:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:22.248 16:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.507 16:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.766 16:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:22.766 16:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:22.766 true 00:06:22.766 16:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:22.766 16:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.024 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.283 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:23.283 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:23.542 true 00:06:23.542 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:23.542 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.800 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.800 
16:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:23.800 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:24.059 true 00:06:24.059 16:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:24.059 16:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.317 16:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.575 16:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:24.575 16:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:24.575 true 00:06:24.834 16:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:24.834 16:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.834 16:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.092 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:25.092 16:19:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:25.351 true 00:06:25.351 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:25.351 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.609 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.867 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:25.867 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:25.867 true 00:06:25.867 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:25.867 16:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.126 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.384 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:26.384 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:26.642 true 00:06:26.642 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:26.642 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.901 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.901 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:26.901 16:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:27.160 true 00:06:27.160 16:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:27.160 16:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.418 16:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.677 16:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:27.677 16:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:27.677 true 00:06:27.677 16:19:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:27.677 16:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.935 16:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.194 16:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:28.194 16:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:28.452 true 00:06:28.453 16:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:28.453 16:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.711 16:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.711 16:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:28.711 16:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:28.970 true 00:06:28.970 16:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:28.970 16:19:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.228 16:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.487 16:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:29.487 16:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:29.745 true 00:06:29.745 16:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:29.745 16:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.745 16:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.004 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:30.004 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:30.263 true 00:06:30.263 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:30.263 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.521 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.780 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:30.780 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:30.780 true 00:06:31.038 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:31.038 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.038 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.297 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:31.297 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:31.555 true 00:06:31.555 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:31.555 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.814 
16:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.072 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:32.072 16:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:32.072 true 00:06:32.072 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:32.072 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.331 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.589 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:32.589 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:32.848 true 00:06:32.848 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:32.848 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.107 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.107 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:33.107 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:33.366 true 00:06:33.366 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:33.366 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.625 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.884 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:33.884 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:34.142 true 00:06:34.142 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:34.142 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.401 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.401 
16:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:34.401 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:34.660 true 00:06:34.660 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:34.660 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.918 16:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.184 16:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:35.184 16:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:35.445 true 00:06:35.445 16:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:35.445 16:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.703 16:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.703 16:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:35.703 16:20:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:35.962 true 00:06:35.962 16:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:35.962 16:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.220 16:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.479 16:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:36.479 16:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:36.738 true 00:06:36.738 16:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:36.738 16:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.996 16:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.996 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:36.996 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:37.254 true 00:06:37.254 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:37.254 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.512 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.770 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:37.770 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:38.028 true 00:06:38.028 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:38.028 16:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.287 16:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.287 16:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:38.287 16:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:38.545 true 00:06:38.545 16:20:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:38.545 16:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.804 16:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.062 16:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:39.062 16:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:39.321 true 00:06:39.321 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:39.321 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.579 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.579 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:39.579 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:39.837 true 00:06:39.837 16:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:39.837 16:20:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.096 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.355 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:40.355 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:40.614 true 00:06:40.614 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:40.614 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.875 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.875 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:40.875 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:41.133 true 00:06:41.133 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:41.133 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.394 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.652 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:41.652 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:41.909 true 00:06:41.909 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:41.909 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.168 16:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.168 16:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:42.168 16:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:42.427 true 00:06:42.427 16:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:42.427 16:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.685 
16:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.944 16:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:42.944 16:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:43.202 true 00:06:43.202 16:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:43.202 16:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.459 16:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.459 16:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:43.459 16:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:43.721 true 00:06:43.721 16:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:43.721 16:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.037 16:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.305 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:44.305 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:44.305 true 00:06:44.305 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:44.305 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.574 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.869 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:44.869 16:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:45.168 true 00:06:45.168 16:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:45.168 16:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.168 16:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.447 
16:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:45.447 16:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:45.705 true 00:06:45.705 16:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:45.705 16:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.963 16:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.222 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:46.222 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:46.222 Initializing NVMe Controllers 00:06:46.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:46.222 Controller IO queue size 128, less than required. 00:06:46.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:46.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:46.222 Initialization complete. Launching workers. 
00:06:46.222 ======================================================== 00:06:46.222 Latency(us) 00:06:46.222 Device Information : IOPS MiB/s Average min max 00:06:46.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27679.03 13.52 4624.28 2077.07 8627.63 00:06:46.222 ======================================================== 00:06:46.222 Total : 27679.03 13.52 4624.28 2077.07 8627.63 00:06:46.222 00:06:46.222 true 00:06:46.222 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795750 00:06:46.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (795750) - No such process 00:06:46.222 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 795750 00:06:46.222 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.480 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.738 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:46.738 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:46.738 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:46.738 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:46.738 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 
00:06:46.997 null0 00:06:46.997 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:46.997 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:46.997 16:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:46.997 null1 00:06:46.997 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:46.997 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:46.997 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:47.255 null2 00:06:47.255 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.255 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.255 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:47.514 null3 00:06:47.514 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.514 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.514 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:47.773 null4 00:06:47.773 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.773 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.773 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:48.032 null5 00:06:48.032 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.032 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.032 16:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:48.032 null6 00:06:48.032 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.032 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.032 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:48.291 null7 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.291 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 801270 801271 801273 801275 801277 801279 801281 801282 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.292 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:48.550 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:48.550 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.550 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:48.550 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:48.550 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:48.550 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:06:48.550 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:48.550 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.810 16:20:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:48.810 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.069 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.069 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.069 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.069 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.069 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.069 16:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.069 16:20:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.069 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.328 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.587 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:49.587 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.587 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.587 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.587 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.588 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.846 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.847 16:20:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.847 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.105 16:20:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.105 16:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.105 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.105 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.105 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.105 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.105 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.105 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.105 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.364 
16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.364 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.623 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.623 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.623 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.623 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.623 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.623 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.623 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.623 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.881 16:20:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.881 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.140 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.140 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.140 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.140 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.140 16:20:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.140 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.140 16:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.140 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.400 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.400 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.400 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.400 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.400 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.400 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.400 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.400 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.659 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.659 16:20:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.918 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.918 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.918 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.918 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.918 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.918 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.918 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.918 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.176 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.177 16:20:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.177 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.436 16:20:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:52.436 16:20:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:52.436 rmmod nvme_tcp 00:06:52.436 rmmod nvme_fabrics 00:06:52.436 rmmod nvme_keyring 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 795490 ']' 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 795490 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 795490 ']' 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 795490 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.436 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795490 00:06:52.695 16:20:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795490' 00:06:52.695 killing process with pid 795490 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 795490 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 795490 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:06:52.695 16:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:55.229 00:06:55.229 real 0m47.328s 00:06:55.229 user 3m21.481s 00:06:55.229 sys 0m16.830s 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.229 ************************************ 00:06:55.229 END TEST nvmf_ns_hotplug_stress 00:06:55.229 ************************************ 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.229 ************************************ 00:06:55.229 START TEST nvmf_delete_subsystem 00:06:55.229 ************************************ 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:55.229 * Looking for test storage... 
00:06:55.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.229 16:20:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.229 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:55.230 16:20:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.230 --rc genhtml_branch_coverage=1 00:06:55.230 --rc genhtml_function_coverage=1 00:06:55.230 --rc genhtml_legend=1 00:06:55.230 --rc geninfo_all_blocks=1 00:06:55.230 --rc geninfo_unexecuted_blocks=1 00:06:55.230 00:06:55.230 ' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.230 --rc genhtml_branch_coverage=1 00:06:55.230 --rc genhtml_function_coverage=1 00:06:55.230 --rc genhtml_legend=1 00:06:55.230 --rc geninfo_all_blocks=1 00:06:55.230 --rc geninfo_unexecuted_blocks=1 00:06:55.230 00:06:55.230 ' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.230 --rc genhtml_branch_coverage=1 00:06:55.230 --rc genhtml_function_coverage=1 00:06:55.230 --rc genhtml_legend=1 00:06:55.230 --rc geninfo_all_blocks=1 00:06:55.230 --rc geninfo_unexecuted_blocks=1 00:06:55.230 00:06:55.230 ' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.230 --rc genhtml_branch_coverage=1 00:06:55.230 --rc genhtml_function_coverage=1 00:06:55.230 --rc genhtml_legend=1 00:06:55.230 --rc geninfo_all_blocks=1 00:06:55.230 --rc geninfo_unexecuted_blocks=1 00:06:55.230 00:06:55.230 ' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.230 16:20:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:55.230 16:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:01.796 16:20:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:01.796 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:01.796 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:01.796 Found net devices under 0000:af:00.0: cvl_0_0 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:07:01.796 Found net devices under 0000:af:00.1: cvl_0_1 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.796 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.797 16:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:01.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:01.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:07:01.797 00:07:01.797 --- 10.0.0.2 ping statistics --- 00:07:01.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.797 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:01.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:07:01.797 00:07:01.797 --- 10.0.0.1 ping statistics --- 00:07:01.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.797 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:01.797 16:20:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=805687 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 805687 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 805687 ']' 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.797 [2024-12-14 16:20:31.126372] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
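The namespace plumbing that ran earlier in this log (the `ip netns` / `ip addr` / `iptables` / `ping` lines from nvmf_tcp_init) can be condensed into one sketch. Interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, the 10.0.0.x addresses, and port 4420 are taken from the log above; the function name and structure are illustrative, not SPDK's actual helper:

```shell
# Sketch of the nvmf_tcp_init plumbing from this log; values mirror the
# log, the wrapper itself is hypothetical. Requires root to run.
nvmf_netns_setup() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Move the target-side port into its own namespace so initiator and
    # target traffic crosses the physical link instead of loopback.
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Admit NVMe/TCP traffic, then verify reachability both ways.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The two `ping` checks correspond to the round-trip statistics printed below in the log.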
00:07:01.797 [2024-12-14 16:20:31.126414] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.797 [2024-12-14 16:20:31.201340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.797 [2024-12-14 16:20:31.222836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.797 [2024-12-14 16:20:31.222873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.797 [2024-12-14 16:20:31.222881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.797 [2024-12-14 16:20:31.222887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.797 [2024-12-14 16:20:31.222893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
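The `nvmfappstart` / `waitforlisten` step visible here — launching `nvmf_tgt` inside the target namespace and blocking until `/var/tmp/spdk.sock` answers — amounts to the following. Binary path and flags (`-i 0 -e 0xFFFF -m 0x3`) are copied from the log; the polling loop is a sketch of the idea, not SPDK's actual `waitforlisten` implementation:

```shell
# Hypothetical condensation of nvmfappstart + waitforlisten from this
# log; rpc_get_methods is a real SPDK RPC used here as a liveness probe.
start_and_wait() {
    local rpc_sock=/var/tmp/spdk.sock i

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Retry until the RPC socket accepts a trivial request.
    for i in $(seq 1 100); do
        if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) never listened on $rpc_sock" >&2
    return 1
}
```

In the log the captured pid (805687, as `nvmfpid`) is also handed to `waitforlisten` so the wait can abort early if the target process dies.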
00:07:01.797 [2024-12-14 16:20:31.223951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.797 [2024-12-14 16:20:31.223952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.797 [2024-12-14 16:20:31.363403] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.797 [2024-12-14 16:20:31.383813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.797 NULL1 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.797 Delay0 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.797 16:20:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=805823 00:07:01.797 16:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:01.797 [2024-12-14 16:20:31.474699] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
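The RPC calls that delete_subsystem.sh issued above (transport, subsystem, listener, null bdev, delay bdev, namespace) can be read as one configuration sequence. Every argument below is copied from the log; the wrapper function is illustrative only:

```shell
# Sketch of the rpc_cmd sequence from delete_subsystem.sh in this log.
configure_delay_subsystem() {
    local rpc="./scripts/rpc.py"

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # A null bdev wrapped in a delay bdev (1,000,000 us per op) keeps
    # I/O in flight long enough for the subsystem to be torn down
    # underneath the running perf workload.
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
```

That artificial latency is why, once `spdk_nvme_perf` connects and `nvmf_delete_subsystem` fires, the log below fills with "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines: those aborts on outstanding I/O are the behavior under test, not a failure of the run.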
00:07:03.701 16:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.701 16:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.701 16:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error 
(sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 starting I/O failed: -6 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 [2024-12-14 16:20:33.722015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166af70 is same with the state(6) to be set 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 
00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Write completed with error (sct=0, sc=8) 00:07:03.701 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write 
completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error 
(sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 [2024-12-14 16:20:33.722501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166b400 is same with the state(6) to be set 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, 
sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Read completed with error (sct=0, sc=8) 00:07:03.702 Write completed with error (sct=0, sc=8) 00:07:03.702 starting I/O failed: -6 00:07:03.702 starting I/O failed: -6 00:07:03.702 starting I/O failed: -6 00:07:03.702 starting I/O failed: -6 00:07:04.637 [2024-12-14 16:20:34.693487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1669190 is same with the state(6) to be set 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with 
error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 [2024-12-14 16:20:34.725494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166b5e0 is same with the state(6) to be set 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 
00:07:04.913 [2024-12-14 16:20:34.726459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe074000c80 is same with the state(6) to be set 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 
Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 [2024-12-14 16:20:34.726623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe07400d060 is same with the state(6) to be set 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Write completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read completed with error (sct=0, sc=8) 00:07:04.913 Read 
completed with error (sct=0, sc=8)
00:07:04.913 Read completed with error (sct=0, sc=8)
00:07:04.913 Read completed with error (sct=0, sc=8)
00:07:04.913 Read completed with error (sct=0, sc=8)
00:07:04.913 Write completed with error (sct=0, sc=8)
00:07:04.913 Read completed with error (sct=0, sc=8)
00:07:04.913 Read completed with error (sct=0, sc=8)
00:07:04.913 [2024-12-14 16:20:34.727156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe07400d800 is same with the state(6) to be set
00:07:04.913 Initializing NVMe Controllers
00:07:04.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:04.913 Controller IO queue size 128, less than required.
00:07:04.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:04.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:04.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:04.913 Initialization complete. Launching workers.
00:07:04.913 ========================================================
00:07:04.913 Latency(us)
00:07:04.913 Device Information : IOPS MiB/s Average min max
00:07:04.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.66 0.08 878709.28 287.27 1008791.84
00:07:04.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.57 0.08 1053294.16 352.82 2001562.66
00:07:04.913 ========================================================
00:07:04.913 Total : 326.23 0.16 970526.03 287.27 2001562.66
00:07:04.913
00:07:04.913 [2024-12-14 16:20:34.727674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1669190 (9): Bad file descriptor
00:07:04.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:04.913 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.913 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:04.913 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805823
00:07:04.913 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805823
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (805823) - No such process
00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 805823
00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:05.172 16:20:35
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 805823 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 805823 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.172 
16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.172 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.172 [2024-12-14 16:20:35.253560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=806458 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806458 00:07:05.430 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.430 [2024-12-14 16:20:35.347324] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:05.689 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.689 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806458 00:07:05.689 16:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.255 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.255 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806458 00:07:06.255 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.822 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.822 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806458 00:07:06.822 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.389 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.389 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806458 00:07:07.389 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.957 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.957 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806458 00:07:07.957 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.216 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:08.216 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806458
00:07:08.216 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:08.474 Initializing NVMe Controllers
00:07:08.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:08.474 Controller IO queue size 128, less than required.
00:07:08.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:08.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:08.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:08.474 Initialization complete. Launching workers.
00:07:08.474 ========================================================
00:07:08.474 Latency(us)
00:07:08.474 Device Information : IOPS MiB/s Average min max
00:07:08.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002096.74 1000136.91 1007661.47
00:07:08.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003159.02 1000167.49 1010174.28
00:07:08.474 ========================================================
00:07:08.474 Total : 256.00 0.12 1002627.88 1000136.91 1010174.28
00:07:08.474
00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806458
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (806458) - No such process
00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 806458
00:07:08.733 16:20:38
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.733 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.733 rmmod nvme_tcp 00:07:08.991 rmmod nvme_fabrics 00:07:08.991 rmmod nvme_keyring 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 805687 ']' 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 805687 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 805687 ']' 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 805687 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 805687 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 805687' 00:07:08.991 killing process with pid 805687 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 805687 00:07:08.991 16:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 805687 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.250 16:20:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.250 16:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.155 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.155 00:07:11.155 real 0m16.294s 00:07:11.155 user 0m29.476s 00:07:11.155 sys 0m5.464s 00:07:11.155 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.155 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.155 ************************************ 00:07:11.155 END TEST nvmf_delete_subsystem 00:07:11.155 ************************************ 00:07:11.155 16:20:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:11.155 16:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.155 16:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.155 16:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.155 ************************************ 00:07:11.155 START TEST nvmf_host_management 00:07:11.155 ************************************ 00:07:11.155 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:11.415 * Looking for test storage... 
00:07:11.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:11.416 16:20:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.416 16:20:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.416 --rc genhtml_branch_coverage=1 00:07:11.416 --rc genhtml_function_coverage=1 00:07:11.416 --rc genhtml_legend=1 00:07:11.416 --rc geninfo_all_blocks=1 00:07:11.416 --rc geninfo_unexecuted_blocks=1 00:07:11.416 00:07:11.416 ' 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.416 --rc genhtml_branch_coverage=1 00:07:11.416 --rc genhtml_function_coverage=1 00:07:11.416 --rc genhtml_legend=1 00:07:11.416 --rc geninfo_all_blocks=1 00:07:11.416 --rc geninfo_unexecuted_blocks=1 00:07:11.416 00:07:11.416 ' 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.416 --rc genhtml_branch_coverage=1 00:07:11.416 --rc genhtml_function_coverage=1 00:07:11.416 --rc genhtml_legend=1 00:07:11.416 --rc geninfo_all_blocks=1 00:07:11.416 --rc geninfo_unexecuted_blocks=1 00:07:11.416 00:07:11.416 ' 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.416 --rc genhtml_branch_coverage=1 00:07:11.416 --rc genhtml_function_coverage=1 00:07:11.416 --rc genhtml_legend=1 00:07:11.416 --rc geninfo_all_blocks=1 00:07:11.416 --rc geninfo_unexecuted_blocks=1 00:07:11.416 00:07:11.416 ' 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.416 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.417 16:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.990 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.990 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.990 16:20:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.990 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.990 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.990 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.990 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.990 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.991 16:20:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:17.991 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:17.991 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:17.991 16:20:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:17.991 Found net devices under 0000:af:00.0: cvl_0_0 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:17.991 Found net devices under 0000:af:00.1: cvl_0_1 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:17.991 16:20:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:17.991 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:17.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:07:17.991 00:07:17.991 --- 10.0.0.2 ping statistics --- 00:07:17.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.991 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:07:17.992 00:07:17.992 --- 10.0.0.1 ping statistics --- 00:07:17.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.992 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=810458 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 810458 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 810458 ']' 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 [2024-12-14 16:20:47.430882] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:17.992 [2024-12-14 16:20:47.430924] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.992 [2024-12-14 16:20:47.508288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.992 [2024-12-14 16:20:47.531132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.992 [2024-12-14 16:20:47.531172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.992 [2024-12-14 16:20:47.531180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.992 [2024-12-14 16:20:47.531186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.992 [2024-12-14 16:20:47.531191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:17.992 [2024-12-14 16:20:47.532547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.992 [2024-12-14 16:20:47.532654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.992 [2024-12-14 16:20:47.532738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.992 [2024-12-14 16:20:47.532739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 [2024-12-14 16:20:47.672600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:17.992 16:20:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 Malloc0 00:07:17.992 [2024-12-14 16:20:47.740188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=810699 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 810699 /var/tmp/bdevperf.sock 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 810699 ']' 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:17.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:17.992 { 00:07:17.992 "params": { 00:07:17.992 "name": "Nvme$subsystem", 00:07:17.992 "trtype": "$TEST_TRANSPORT", 00:07:17.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.992 "adrfam": "ipv4", 00:07:17.992 "trsvcid": "$NVMF_PORT", 00:07:17.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.992 "hdgst": ${hdgst:-false}, 
00:07:17.992 "ddgst": ${ddgst:-false} 00:07:17.992 }, 00:07:17.992 "method": "bdev_nvme_attach_controller" 00:07:17.992 } 00:07:17.992 EOF 00:07:17.992 )") 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:17.992 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:17.992 "params": { 00:07:17.992 "name": "Nvme0", 00:07:17.992 "trtype": "tcp", 00:07:17.992 "traddr": "10.0.0.2", 00:07:17.992 "adrfam": "ipv4", 00:07:17.992 "trsvcid": "4420", 00:07:17.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.993 "hdgst": false, 00:07:17.993 "ddgst": false 00:07:17.993 }, 00:07:17.993 "method": "bdev_nvme_attach_controller" 00:07:17.993 }' 00:07:17.993 [2024-12-14 16:20:47.834836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:17.993 [2024-12-14 16:20:47.834878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810699 ] 00:07:17.993 [2024-12-14 16:20:47.911328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.993 [2024-12-14 16:20:47.933529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.253 Running I/O for 10 seconds... 
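The `--json /dev/fd/63` config that bdevperf consumes here is produced by `gen_nvmf_target_json 0`, which expands the heredoc template logged above with `TEST_TRANSPORT=tcp`, `NVMF_FIRST_TARGET_IP=10.0.0.2` and `NVMF_PORT=4420`. A standalone sketch of that expansion (the function name and variable defaults below are assumptions for illustration, not the actual nvmf/common.sh code):

```shell
# Hypothetical re-creation of the bdev_nvme_attach_controller config block
# that gen_nvmf_target_json prints in the log above; defaults are assumed.
gen_target_json_sketch() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

out=$(gen_target_json_sketch 0)
printf '%s\n' "$out"
```

Matching the printf output at 16:20:47 above, subsystem index 0 yields controller name Nvme0 attached to nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420.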
00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=100 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 100 -ge 100 ']' 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:18.253 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:18.254 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.254 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.254 [2024-12-14 16:20:48.313049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.254 [2024-12-14 16:20:48.313087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.254 [2024-12-14 16:20:48.313105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.254 [2024-12-14 16:20:48.313119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.254 [2024-12-14 16:20:48.313133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9490 is same with the state(6) to be set 00:07:18.254 [2024-12-14 16:20:48.313426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 
[2024-12-14 16:20:48.313480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 
[2024-12-14 16:20:48.313842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.254 [2024-12-14 16:20:48.313928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.254 [2024-12-14 16:20:48.313939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.313945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.313953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.313960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.313968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.313975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.313983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.313989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.313997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 
16:20:48.314181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 [2024-12-14 16:20:48.314409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.255 [2024-12-14 16:20:48.314416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.255 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.255 16:20:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:18.255 [2024-12-14 16:20:48.315342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:18.255 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.255 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.255 task offset: 24576 on job bdev=Nvme0n1 fails 00:07:18.255 00:07:18.255 Latency(us) 00:07:18.255 [2024-12-14T15:20:48.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.255 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:18.255 Job: Nvme0n1 ended in about 0.11 seconds with error 00:07:18.255 Verification LBA range: start 0x0 length 0x400 00:07:18.255 Nvme0n1 : 0.11 1782.18 111.39 594.06 0.00 24807.34 1669.61 26588.89 00:07:18.255 [2024-12-14T15:20:48.341Z] =================================================================================================================== 00:07:18.256 [2024-12-14T15:20:48.342Z] Total : 1782.18 111.39 594.06 0.00 24807.34 1669.61 26588.89 00:07:18.256 [2024-12-14 16:20:48.317685] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.256 [2024-12-14 16:20:48.317705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a9490 (9): Bad file descriptor 00:07:18.256 [2024-12-14 16:20:48.321926] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:18.256 [2024-12-14 16:20:48.322077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:18.256 [2024-12-14 16:20:48.322101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.256 [2024-12-14 16:20:48.322115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:18.256 [2024-12-14 16:20:48.322122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:18.256 [2024-12-14 16:20:48.322129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:18.256 [2024-12-14 16:20:48.322136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24a9490 00:07:18.256 [2024-12-14 16:20:48.322156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a9490 (9): Bad file descriptor 00:07:18.256 [2024-12-14 16:20:48.322168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:18.256 [2024-12-14 16:20:48.322175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:18.256 [2024-12-14 16:20:48.322183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:18.256 [2024-12-14 16:20:48.322192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:07:18.256 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.256 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 810699 00:07:19.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (810699) - No such process 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:19.633 { 00:07:19.633 "params": { 00:07:19.633 "name": "Nvme$subsystem", 00:07:19.633 "trtype": "$TEST_TRANSPORT", 00:07:19.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:19.633 "adrfam": "ipv4", 00:07:19.633 "trsvcid": "$NVMF_PORT", 00:07:19.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:19.633 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:19.633 "hdgst": ${hdgst:-false}, 00:07:19.633 "ddgst": ${ddgst:-false} 00:07:19.633 }, 00:07:19.633 "method": "bdev_nvme_attach_controller" 00:07:19.633 } 00:07:19.633 EOF 00:07:19.633 )") 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:19.633 16:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:19.633 "params": { 00:07:19.633 "name": "Nvme0", 00:07:19.633 "trtype": "tcp", 00:07:19.633 "traddr": "10.0.0.2", 00:07:19.633 "adrfam": "ipv4", 00:07:19.633 "trsvcid": "4420", 00:07:19.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:19.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:19.633 "hdgst": false, 00:07:19.633 "ddgst": false 00:07:19.633 }, 00:07:19.633 "method": "bdev_nvme_attach_controller" 00:07:19.633 }' 00:07:19.633 [2024-12-14 16:20:49.381613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:19.633 [2024-12-14 16:20:49.381654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810939 ] 00:07:19.633 [2024-12-14 16:20:49.457365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.633 [2024-12-14 16:20:49.477849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.633 Running I/O for 1 seconds... 
00:07:20.570 1999.00 IOPS, 124.94 MiB/s 00:07:20.570 Latency(us) 00:07:20.570 [2024-12-14T15:20:50.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.570 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:20.570 Verification LBA range: start 0x0 length 0x400 00:07:20.570 Nvme0n1 : 1.01 2048.94 128.06 0.00 0.00 30657.99 1466.76 26464.06 00:07:20.570 [2024-12-14T15:20:50.656Z] =================================================================================================================== 00:07:20.570 [2024-12-14T15:20:50.656Z] Total : 2048.94 128.06 0.00 0.00 30657.99 1466.76 26464.06 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.829 16:20:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.829 rmmod nvme_tcp 00:07:20.829 rmmod nvme_fabrics 00:07:20.829 rmmod nvme_keyring 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 810458 ']' 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 810458 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 810458 ']' 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 810458 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 810458 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 810458' 00:07:20.829 killing process with pid 810458 00:07:20.829 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 810458 00:07:20.829 16:20:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 810458 00:07:21.088 [2024-12-14 16:20:51.061365] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:21.088 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:21.088 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:21.088 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:21.088 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:21.088 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:21.088 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:21.089 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:21.089 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:21.089 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:21.089 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.089 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.089 16:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.626 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:23.626 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:23.626 00:07:23.626 real 0m11.930s 00:07:23.626 user 0m17.816s 
00:07:23.626 sys 0m5.391s 00:07:23.626 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.626 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.626 ************************************ 00:07:23.626 END TEST nvmf_host_management 00:07:23.626 ************************************ 00:07:23.626 16:20:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:23.626 16:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.626 16:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.627 ************************************ 00:07:23.627 START TEST nvmf_lvol 00:07:23.627 ************************************ 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:23.627 * Looking for test storage... 
00:07:23.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.627 16:20:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:23.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.627 --rc genhtml_branch_coverage=1 00:07:23.627 --rc genhtml_function_coverage=1 00:07:23.627 --rc genhtml_legend=1 00:07:23.627 --rc geninfo_all_blocks=1 00:07:23.627 --rc geninfo_unexecuted_blocks=1 
00:07:23.627 00:07:23.627 ' 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:23.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.627 --rc genhtml_branch_coverage=1 00:07:23.627 --rc genhtml_function_coverage=1 00:07:23.627 --rc genhtml_legend=1 00:07:23.627 --rc geninfo_all_blocks=1 00:07:23.627 --rc geninfo_unexecuted_blocks=1 00:07:23.627 00:07:23.627 ' 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:23.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.627 --rc genhtml_branch_coverage=1 00:07:23.627 --rc genhtml_function_coverage=1 00:07:23.627 --rc genhtml_legend=1 00:07:23.627 --rc geninfo_all_blocks=1 00:07:23.627 --rc geninfo_unexecuted_blocks=1 00:07:23.627 00:07:23.627 ' 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:23.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.627 --rc genhtml_branch_coverage=1 00:07:23.627 --rc genhtml_function_coverage=1 00:07:23.627 --rc genhtml_legend=1 00:07:23.627 --rc geninfo_all_blocks=1 00:07:23.627 --rc geninfo_unexecuted_blocks=1 00:07:23.627 00:07:23.627 ' 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.627 16:20:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.627 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:23.628 16:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.201 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:30.202 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:30.202 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.202 
16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:30.202 Found net devices under 0000:af:00.0: cvl_0_0 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.202 16:20:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:30.202 Found net devices under 0000:af:00.1: cvl_0_1 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:07:30.202 00:07:30.202 --- 10.0.0.2 ping statistics --- 00:07:30.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.202 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:30.202 00:07:30.202 --- 10.0.0.1 ping statistics --- 00:07:30.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.202 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=814643 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 814643 00:07:30.202 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 814643 ']' 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.203 [2024-12-14 16:20:59.534750] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:30.203 [2024-12-14 16:20:59.534800] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.203 [2024-12-14 16:20:59.614817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.203 [2024-12-14 16:20:59.637806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.203 [2024-12-14 16:20:59.637843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.203 [2024-12-14 16:20:59.637850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.203 [2024-12-14 16:20:59.637856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.203 [2024-12-14 16:20:59.637861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:30.203 [2024-12-14 16:20:59.639137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.203 [2024-12-14 16:20:59.639243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.203 [2024-12-14 16:20:59.639244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.203 [2024-12-14 16:20:59.958885] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.203 16:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.203 16:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:30.203 16:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.462 16:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:30.462 16:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:30.721 16:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:30.980 16:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=42b1039f-ab20-4b11-befe-0848576ea0be 00:07:30.980 16:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42b1039f-ab20-4b11-befe-0848576ea0be lvol 20 00:07:31.240 16:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e80645f6-f2cf-4b91-b151-0f6a6b5c0671 00:07:31.240 16:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.240 16:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e80645f6-f2cf-4b91-b151-0f6a6b5c0671 00:07:31.499 16:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.758 [2024-12-14 16:21:01.637106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.758 16:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.017 16:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=815250 00:07:32.017 16:21:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:32.017 16:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:32.953 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e80645f6-f2cf-4b91-b151-0f6a6b5c0671 MY_SNAPSHOT 00:07:33.212 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=31d5becf-0107-47b4-80de-d2c8cabe0dde 00:07:33.212 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e80645f6-f2cf-4b91-b151-0f6a6b5c0671 30 00:07:33.470 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 31d5becf-0107-47b4-80de-d2c8cabe0dde MY_CLONE 00:07:33.729 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b6af277f-194b-4886-a4c3-c7d5deab2eb1 00:07:33.729 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b6af277f-194b-4886-a4c3-c7d5deab2eb1 00:07:34.297 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 815250 00:07:42.414 Initializing NVMe Controllers 00:07:42.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.414 Controller IO queue size 128, less than required. 00:07:42.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:42.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:42.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:42.414 Initialization complete. Launching workers. 00:07:42.414 ======================================================== 00:07:42.414 Latency(us) 00:07:42.414 Device Information : IOPS MiB/s Average min max 00:07:42.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12073.00 47.16 10607.22 1865.28 72042.50 00:07:42.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11944.60 46.66 10717.10 3075.12 64745.07 00:07:42.414 ======================================================== 00:07:42.414 Total : 24017.60 93.82 10661.87 1865.28 72042.50 00:07:42.414 00:07:42.414 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.414 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e80645f6-f2cf-4b91-b151-0f6a6b5c0671 00:07:42.673 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42b1039f-ab20-4b11-befe-0848576ea0be 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.932 rmmod nvme_tcp 00:07:42.932 rmmod nvme_fabrics 00:07:42.932 rmmod nvme_keyring 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 814643 ']' 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 814643 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 814643 ']' 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 814643 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814643 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814643' 00:07:42.932 killing process with pid 814643 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 814643 00:07:42.932 16:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 814643 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.192 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.098 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.098 00:07:45.098 real 0m21.916s 00:07:45.098 user 1m2.979s 00:07:45.098 sys 0m7.477s 00:07:45.098 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.098 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.098 ************************************ 00:07:45.098 END TEST nvmf_lvol 00:07:45.098 
************************************ 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.358 ************************************ 00:07:45.358 START TEST nvmf_lvs_grow 00:07:45.358 ************************************ 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.358 * Looking for test storage... 00:07:45.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.358 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.359 --rc genhtml_branch_coverage=1 00:07:45.359 --rc genhtml_function_coverage=1 00:07:45.359 --rc genhtml_legend=1 00:07:45.359 --rc geninfo_all_blocks=1 00:07:45.359 --rc geninfo_unexecuted_blocks=1 00:07:45.359 00:07:45.359 ' 
00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.359 --rc genhtml_branch_coverage=1 00:07:45.359 --rc genhtml_function_coverage=1 00:07:45.359 --rc genhtml_legend=1 00:07:45.359 --rc geninfo_all_blocks=1 00:07:45.359 --rc geninfo_unexecuted_blocks=1 00:07:45.359 00:07:45.359 ' 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.359 --rc genhtml_branch_coverage=1 00:07:45.359 --rc genhtml_function_coverage=1 00:07:45.359 --rc genhtml_legend=1 00:07:45.359 --rc geninfo_all_blocks=1 00:07:45.359 --rc geninfo_unexecuted_blocks=1 00:07:45.359 00:07:45.359 ' 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.359 --rc genhtml_branch_coverage=1 00:07:45.359 --rc genhtml_function_coverage=1 00:07:45.359 --rc genhtml_legend=1 00:07:45.359 --rc geninfo_all_blocks=1 00:07:45.359 --rc geninfo_unexecuted_blocks=1 00:07:45.359 00:07:45.359 ' 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.359 16:21:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.359 
16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.359 16:21:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.359 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.618 
16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.618 16:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:52.193 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:52.193 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.193 
16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:52.193 Found net devices under 0000:af:00.0: cvl_0_0 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:52.193 Found net devices under 0000:af:00.1: cvl_0_1 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.193 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.194 16:21:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:07:52.194 00:07:52.194 --- 10.0.0.2 ping statistics --- 00:07:52.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.194 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:07:52.194 00:07:52.194 --- 10.0.0.1 ping statistics --- 00:07:52.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.194 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=820917 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 820917 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 820917 ']' 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.194 [2024-12-14 16:21:21.480968] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:52.194 [2024-12-14 16:21:21.481008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.194 [2024-12-14 16:21:21.558154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.194 [2024-12-14 16:21:21.578741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.194 [2024-12-14 16:21:21.578776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.194 [2024-12-14 16:21:21.578783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.194 [2024-12-14 16:21:21.578789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.194 [2024-12-14 16:21:21.578794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.194 [2024-12-14 16:21:21.579265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:52.194 [2024-12-14 16:21:21.893941] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.194 ************************************ 00:07:52.194 START TEST lvs_grow_clean 00:07:52.194 ************************************ 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.194 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.194 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:52.194 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:52.453 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:07:52.453 16:21:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:07:52.453 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:52.713 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:52.713 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:52.713 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 lvol 150 00:07:52.713 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=86065d97-c073-4b0b-9be4-c4b88edbb059 00:07:52.713 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.713 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:52.973 [2024-12-14 16:21:22.926442] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:52.973 [2024-12-14 16:21:22.926493] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:52.973 true 00:07:52.973 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:07:52.973 16:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:53.232 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:53.232 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.232 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86065d97-c073-4b0b-9be4-c4b88edbb059 00:07:53.492 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.751 [2024-12-14 16:21:23.644587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=821401 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:53.751 16:21:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 821401 /var/tmp/bdevperf.sock 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 821401 ']' 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.751 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:54.010 [2024-12-14 16:21:23.860232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:54.010 [2024-12-14 16:21:23.860279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821401 ] 00:07:54.010 [2024-12-14 16:21:23.935693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.010 [2024-12-14 16:21:23.958035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.010 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.010 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:54.010 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:54.579 Nvme0n1 00:07:54.579 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.579 [ 00:07:54.579 { 00:07:54.579 "name": "Nvme0n1", 00:07:54.579 "aliases": [ 00:07:54.579 "86065d97-c073-4b0b-9be4-c4b88edbb059" 00:07:54.579 ], 00:07:54.579 "product_name": "NVMe disk", 00:07:54.579 "block_size": 4096, 00:07:54.579 "num_blocks": 38912, 00:07:54.579 "uuid": "86065d97-c073-4b0b-9be4-c4b88edbb059", 00:07:54.579 "numa_id": 1, 00:07:54.579 "assigned_rate_limits": { 00:07:54.579 "rw_ios_per_sec": 0, 00:07:54.579 "rw_mbytes_per_sec": 0, 00:07:54.579 "r_mbytes_per_sec": 0, 00:07:54.579 "w_mbytes_per_sec": 0 00:07:54.579 }, 00:07:54.579 "claimed": false, 00:07:54.579 "zoned": false, 00:07:54.579 "supported_io_types": { 00:07:54.579 "read": true, 
00:07:54.579 "write": true, 00:07:54.579 "unmap": true, 00:07:54.579 "flush": true, 00:07:54.579 "reset": true, 00:07:54.579 "nvme_admin": true, 00:07:54.579 "nvme_io": true, 00:07:54.579 "nvme_io_md": false, 00:07:54.579 "write_zeroes": true, 00:07:54.579 "zcopy": false, 00:07:54.579 "get_zone_info": false, 00:07:54.579 "zone_management": false, 00:07:54.579 "zone_append": false, 00:07:54.579 "compare": true, 00:07:54.579 "compare_and_write": true, 00:07:54.579 "abort": true, 00:07:54.579 "seek_hole": false, 00:07:54.579 "seek_data": false, 00:07:54.579 "copy": true, 00:07:54.579 "nvme_iov_md": false 00:07:54.579 }, 00:07:54.579 "memory_domains": [ 00:07:54.579 { 00:07:54.579 "dma_device_id": "system", 00:07:54.579 "dma_device_type": 1 00:07:54.579 } 00:07:54.579 ], 00:07:54.579 "driver_specific": { 00:07:54.579 "nvme": [ 00:07:54.579 { 00:07:54.579 "trid": { 00:07:54.579 "trtype": "TCP", 00:07:54.579 "adrfam": "IPv4", 00:07:54.579 "traddr": "10.0.0.2", 00:07:54.579 "trsvcid": "4420", 00:07:54.579 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.579 }, 00:07:54.579 "ctrlr_data": { 00:07:54.579 "cntlid": 1, 00:07:54.579 "vendor_id": "0x8086", 00:07:54.579 "model_number": "SPDK bdev Controller", 00:07:54.579 "serial_number": "SPDK0", 00:07:54.579 "firmware_revision": "25.01", 00:07:54.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.579 "oacs": { 00:07:54.579 "security": 0, 00:07:54.579 "format": 0, 00:07:54.579 "firmware": 0, 00:07:54.579 "ns_manage": 0 00:07:54.579 }, 00:07:54.579 "multi_ctrlr": true, 00:07:54.579 "ana_reporting": false 00:07:54.579 }, 00:07:54.579 "vs": { 00:07:54.579 "nvme_version": "1.3" 00:07:54.579 }, 00:07:54.579 "ns_data": { 00:07:54.579 "id": 1, 00:07:54.579 "can_share": true 00:07:54.579 } 00:07:54.579 } 00:07:54.579 ], 00:07:54.579 "mp_policy": "active_passive" 00:07:54.579 } 00:07:54.579 } 00:07:54.579 ] 00:07:54.838 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=821554 
00:07:54.838 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.838 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.838 Running I/O for 10 seconds... 00:07:55.775 Latency(us) 00:07:55.775 [2024-12-14T15:21:25.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.775 Nvme0n1 : 1.00 23693.00 92.55 0.00 0.00 0.00 0.00 0.00 00:07:55.775 [2024-12-14T15:21:25.861Z] =================================================================================================================== 00:07:55.775 [2024-12-14T15:21:25.861Z] Total : 23693.00 92.55 0.00 0.00 0.00 0.00 0.00 00:07:55.775 00:07:56.712 16:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:07:56.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.712 Nvme0n1 : 2.00 23809.00 93.00 0.00 0.00 0.00 0.00 0.00 00:07:56.712 [2024-12-14T15:21:26.798Z] =================================================================================================================== 00:07:56.712 [2024-12-14T15:21:26.798Z] Total : 23809.00 93.00 0.00 0.00 0.00 0.00 0.00 00:07:56.712 00:07:56.971 true 00:07:56.971 16:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:07:56.971 16:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:57.230 16:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.230 16:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.230 16:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 821554 00:07:57.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.799 Nvme0n1 : 3.00 23855.67 93.19 0.00 0.00 0.00 0.00 0.00 00:07:57.799 [2024-12-14T15:21:27.885Z] =================================================================================================================== 00:07:57.799 [2024-12-14T15:21:27.885Z] Total : 23855.67 93.19 0.00 0.00 0.00 0.00 0.00 00:07:57.799 00:07:58.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.735 Nvme0n1 : 4.00 23895.25 93.34 0.00 0.00 0.00 0.00 0.00 00:07:58.735 [2024-12-14T15:21:28.821Z] =================================================================================================================== 00:07:58.735 [2024-12-14T15:21:28.821Z] Total : 23895.25 93.34 0.00 0.00 0.00 0.00 0.00 00:07:58.735 00:08:00.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.113 Nvme0n1 : 5.00 23942.80 93.53 0.00 0.00 0.00 0.00 0.00 00:08:00.113 [2024-12-14T15:21:30.199Z] =================================================================================================================== 00:08:00.113 [2024-12-14T15:21:30.199Z] Total : 23942.80 93.53 0.00 0.00 0.00 0.00 0.00 00:08:00.113 00:08:01.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.051 Nvme0n1 : 6.00 23950.17 93.56 0.00 0.00 0.00 0.00 0.00 00:08:01.051 [2024-12-14T15:21:31.137Z] =================================================================================================================== 00:08:01.051 
[2024-12-14T15:21:31.137Z] Total : 23950.17 93.56 0.00 0.00 0.00 0.00 0.00 00:08:01.051 00:08:01.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.995 Nvme0n1 : 7.00 23967.14 93.62 0.00 0.00 0.00 0.00 0.00 00:08:01.995 [2024-12-14T15:21:32.081Z] =================================================================================================================== 00:08:01.995 [2024-12-14T15:21:32.081Z] Total : 23967.14 93.62 0.00 0.00 0.00 0.00 0.00 00:08:01.995 00:08:02.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.933 Nvme0n1 : 8.00 23996.00 93.73 0.00 0.00 0.00 0.00 0.00 00:08:02.933 [2024-12-14T15:21:33.019Z] =================================================================================================================== 00:08:02.933 [2024-12-14T15:21:33.019Z] Total : 23996.00 93.73 0.00 0.00 0.00 0.00 0.00 00:08:02.933 00:08:03.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.870 Nvme0n1 : 9.00 24016.33 93.81 0.00 0.00 0.00 0.00 0.00 00:08:03.870 [2024-12-14T15:21:33.956Z] =================================================================================================================== 00:08:03.870 [2024-12-14T15:21:33.956Z] Total : 24016.33 93.81 0.00 0.00 0.00 0.00 0.00 00:08:03.870 00:08:04.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.806 Nvme0n1 : 10.00 24031.00 93.87 0.00 0.00 0.00 0.00 0.00 00:08:04.806 [2024-12-14T15:21:34.892Z] =================================================================================================================== 00:08:04.806 [2024-12-14T15:21:34.892Z] Total : 24031.00 93.87 0.00 0.00 0.00 0.00 0.00 00:08:04.806 00:08:04.806 00:08:04.806 Latency(us) 00:08:04.806 [2024-12-14T15:21:34.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:04.806 Nvme0n1 : 10.00 24032.12 93.88 0.00 0.00 5323.26 2512.21 10111.27 00:08:04.806 [2024-12-14T15:21:34.892Z] =================================================================================================================== 00:08:04.806 [2024-12-14T15:21:34.892Z] Total : 24032.12 93.88 0.00 0.00 5323.26 2512.21 10111.27 00:08:04.806 { 00:08:04.806 "results": [ 00:08:04.806 { 00:08:04.806 "job": "Nvme0n1", 00:08:04.806 "core_mask": "0x2", 00:08:04.807 "workload": "randwrite", 00:08:04.807 "status": "finished", 00:08:04.807 "queue_depth": 128, 00:08:04.807 "io_size": 4096, 00:08:04.807 "runtime": 10.004859, 00:08:04.807 "iops": 24032.122791535592, 00:08:04.807 "mibps": 93.8754796544359, 00:08:04.807 "io_failed": 0, 00:08:04.807 "io_timeout": 0, 00:08:04.807 "avg_latency_us": 5323.255505939755, 00:08:04.807 "min_latency_us": 2512.213333333333, 00:08:04.807 "max_latency_us": 10111.26857142857 00:08:04.807 } 00:08:04.807 ], 00:08:04.807 "core_count": 1 00:08:04.807 } 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 821401 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 821401 ']' 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 821401 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821401 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:04.807 16:21:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821401' 00:08:04.807 killing process with pid 821401 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 821401 00:08:04.807 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.807 00:08:04.807 Latency(us) 00:08:04.807 [2024-12-14T15:21:34.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.807 [2024-12-14T15:21:34.893Z] =================================================================================================================== 00:08:04.807 [2024-12-14T15:21:34.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.807 16:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 821401 00:08:05.066 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.324 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.582 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:08:05.582 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.582 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:05.582 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:05.582 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.841 [2024-12-14 16:21:35.801212] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.841 16:21:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:05.841 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:08:06.100 request: 00:08:06.100 { 00:08:06.100 "uuid": "6d331a6d-77b1-447b-8f07-8e7494ab25f5", 00:08:06.100 "method": "bdev_lvol_get_lvstores", 00:08:06.100 "req_id": 1 00:08:06.100 } 00:08:06.100 Got JSON-RPC error response 00:08:06.100 response: 00:08:06.100 { 00:08:06.100 "code": -19, 00:08:06.100 "message": "No such device" 00:08:06.100 } 00:08:06.100 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:06.100 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.100 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:06.100 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.100 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.359 aio_bdev 00:08:06.359 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 86065d97-c073-4b0b-9be4-c4b88edbb059 00:08:06.359 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=86065d97-c073-4b0b-9be4-c4b88edbb059 00:08:06.359 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.359 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:06.359 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.359 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.359 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.359 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 86065d97-c073-4b0b-9be4-c4b88edbb059 -t 2000 00:08:06.637 [ 00:08:06.637 { 00:08:06.637 "name": "86065d97-c073-4b0b-9be4-c4b88edbb059", 00:08:06.637 "aliases": [ 00:08:06.637 "lvs/lvol" 00:08:06.637 ], 00:08:06.637 "product_name": "Logical Volume", 00:08:06.637 "block_size": 4096, 00:08:06.637 "num_blocks": 38912, 00:08:06.637 "uuid": "86065d97-c073-4b0b-9be4-c4b88edbb059", 00:08:06.637 "assigned_rate_limits": { 00:08:06.637 "rw_ios_per_sec": 0, 00:08:06.637 "rw_mbytes_per_sec": 0, 00:08:06.637 "r_mbytes_per_sec": 0, 00:08:06.637 "w_mbytes_per_sec": 0 00:08:06.637 }, 00:08:06.637 "claimed": false, 00:08:06.637 "zoned": false, 00:08:06.637 "supported_io_types": { 00:08:06.637 "read": true, 00:08:06.637 "write": true, 00:08:06.637 "unmap": true, 00:08:06.637 "flush": false, 00:08:06.637 "reset": true, 00:08:06.637 
"nvme_admin": false, 00:08:06.637 "nvme_io": false, 00:08:06.637 "nvme_io_md": false, 00:08:06.637 "write_zeroes": true, 00:08:06.637 "zcopy": false, 00:08:06.637 "get_zone_info": false, 00:08:06.637 "zone_management": false, 00:08:06.637 "zone_append": false, 00:08:06.637 "compare": false, 00:08:06.637 "compare_and_write": false, 00:08:06.637 "abort": false, 00:08:06.637 "seek_hole": true, 00:08:06.637 "seek_data": true, 00:08:06.637 "copy": false, 00:08:06.637 "nvme_iov_md": false 00:08:06.637 }, 00:08:06.637 "driver_specific": { 00:08:06.637 "lvol": { 00:08:06.637 "lvol_store_uuid": "6d331a6d-77b1-447b-8f07-8e7494ab25f5", 00:08:06.637 "base_bdev": "aio_bdev", 00:08:06.637 "thin_provision": false, 00:08:06.637 "num_allocated_clusters": 38, 00:08:06.637 "snapshot": false, 00:08:06.637 "clone": false, 00:08:06.637 "esnap_clone": false 00:08:06.637 } 00:08:06.637 } 00:08:06.637 } 00:08:06.637 ] 00:08:06.637 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:06.637 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:08:06.637 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:06.924 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:06.924 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:08:06.924 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:06.924 16:21:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:06.924 16:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86065d97-c073-4b0b-9be4-c4b88edbb059 00:08:07.236 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d331a6d-77b1-447b-8f07-8e7494ab25f5 00:08:07.554 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.554 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.554 00:08:07.554 real 0m15.656s 00:08:07.554 user 0m15.320s 00:08:07.554 sys 0m1.412s 00:08:07.554 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.554 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.554 ************************************ 00:08:07.554 END TEST lvs_grow_clean 00:08:07.554 ************************************ 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.813 ************************************ 
00:08:07.813 START TEST lvs_grow_dirty 00:08:07.813 ************************************ 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.813 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.072 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:08.072 16:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.072 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:08.072 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:08.072 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.330 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.330 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.330 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 lvol 150 00:08:08.588 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf 00:08:08.588 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.588 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.588 [2024-12-14 16:21:38.657492] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:08.588 [2024-12-14 16:21:38.657545] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.588 true 00:08:08.846 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:08.846 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.846 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:08.846 16:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.104 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf 00:08:09.362 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.362 [2024-12-14 16:21:39.395693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.362 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=824060 00:08:09.621 16:21:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 824060 /var/tmp/bdevperf.sock 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 824060 ']' 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.621 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.621 [2024-12-14 16:21:39.635619] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:09.621 [2024-12-14 16:21:39.635665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824060 ] 00:08:09.880 [2024-12-14 16:21:39.708141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.880 [2024-12-14 16:21:39.729808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.880 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.880 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:09.880 16:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.138 Nvme0n1 00:08:10.138 16:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.397 [ 00:08:10.397 { 00:08:10.397 "name": "Nvme0n1", 00:08:10.397 "aliases": [ 00:08:10.397 "f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf" 00:08:10.397 ], 00:08:10.397 "product_name": "NVMe disk", 00:08:10.397 "block_size": 4096, 00:08:10.397 "num_blocks": 38912, 00:08:10.397 "uuid": "f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf", 00:08:10.397 "numa_id": 1, 00:08:10.397 "assigned_rate_limits": { 00:08:10.397 "rw_ios_per_sec": 0, 00:08:10.397 "rw_mbytes_per_sec": 0, 00:08:10.397 "r_mbytes_per_sec": 0, 00:08:10.397 "w_mbytes_per_sec": 0 00:08:10.397 }, 00:08:10.397 "claimed": false, 00:08:10.397 "zoned": false, 00:08:10.397 "supported_io_types": { 00:08:10.397 "read": true, 
00:08:10.397 "write": true, 00:08:10.397 "unmap": true, 00:08:10.397 "flush": true, 00:08:10.397 "reset": true, 00:08:10.397 "nvme_admin": true, 00:08:10.397 "nvme_io": true, 00:08:10.397 "nvme_io_md": false, 00:08:10.397 "write_zeroes": true, 00:08:10.397 "zcopy": false, 00:08:10.397 "get_zone_info": false, 00:08:10.397 "zone_management": false, 00:08:10.397 "zone_append": false, 00:08:10.397 "compare": true, 00:08:10.397 "compare_and_write": true, 00:08:10.397 "abort": true, 00:08:10.397 "seek_hole": false, 00:08:10.397 "seek_data": false, 00:08:10.397 "copy": true, 00:08:10.397 "nvme_iov_md": false 00:08:10.397 }, 00:08:10.397 "memory_domains": [ 00:08:10.397 { 00:08:10.397 "dma_device_id": "system", 00:08:10.397 "dma_device_type": 1 00:08:10.397 } 00:08:10.397 ], 00:08:10.397 "driver_specific": { 00:08:10.397 "nvme": [ 00:08:10.397 { 00:08:10.397 "trid": { 00:08:10.397 "trtype": "TCP", 00:08:10.397 "adrfam": "IPv4", 00:08:10.397 "traddr": "10.0.0.2", 00:08:10.397 "trsvcid": "4420", 00:08:10.397 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:10.397 }, 00:08:10.397 "ctrlr_data": { 00:08:10.397 "cntlid": 1, 00:08:10.397 "vendor_id": "0x8086", 00:08:10.397 "model_number": "SPDK bdev Controller", 00:08:10.397 "serial_number": "SPDK0", 00:08:10.397 "firmware_revision": "25.01", 00:08:10.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.397 "oacs": { 00:08:10.397 "security": 0, 00:08:10.397 "format": 0, 00:08:10.397 "firmware": 0, 00:08:10.397 "ns_manage": 0 00:08:10.397 }, 00:08:10.397 "multi_ctrlr": true, 00:08:10.397 "ana_reporting": false 00:08:10.397 }, 00:08:10.397 "vs": { 00:08:10.397 "nvme_version": "1.3" 00:08:10.397 }, 00:08:10.397 "ns_data": { 00:08:10.397 "id": 1, 00:08:10.397 "can_share": true 00:08:10.397 } 00:08:10.397 } 00:08:10.397 ], 00:08:10.397 "mp_policy": "active_passive" 00:08:10.397 } 00:08:10.397 } 00:08:10.397 ] 00:08:10.397 16:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=824168 
00:08:10.397 16:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.397 16:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.397 Running I/O for 10 seconds... 00:08:11.772 Latency(us) 00:08:11.772 [2024-12-14T15:21:41.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.772 Nvme0n1 : 1.00 23434.00 91.54 0.00 0.00 0.00 0.00 0.00 00:08:11.772 [2024-12-14T15:21:41.858Z] =================================================================================================================== 00:08:11.772 [2024-12-14T15:21:41.858Z] Total : 23434.00 91.54 0.00 0.00 0.00 0.00 0.00 00:08:11.772 00:08:12.338 16:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:12.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.597 Nvme0n1 : 2.00 23602.00 92.20 0.00 0.00 0.00 0.00 0.00 00:08:12.597 [2024-12-14T15:21:42.683Z] =================================================================================================================== 00:08:12.597 [2024-12-14T15:21:42.683Z] Total : 23602.00 92.20 0.00 0.00 0.00 0.00 0.00 00:08:12.597 00:08:12.597 true 00:08:12.597 16:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:12.597 16:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:12.855 16:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:12.855 16:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:12.855 16:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 824168 00:08:13.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.422 Nvme0n1 : 3.00 23675.00 92.48 0.00 0.00 0.00 0.00 0.00 00:08:13.422 [2024-12-14T15:21:43.508Z] =================================================================================================================== 00:08:13.422 [2024-12-14T15:21:43.508Z] Total : 23675.00 92.48 0.00 0.00 0.00 0.00 0.00 00:08:13.422 00:08:14.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.797 Nvme0n1 : 4.00 23702.00 92.59 0.00 0.00 0.00 0.00 0.00 00:08:14.797 [2024-12-14T15:21:44.883Z] =================================================================================================================== 00:08:14.797 [2024-12-14T15:21:44.883Z] Total : 23702.00 92.59 0.00 0.00 0.00 0.00 0.00 00:08:14.797 00:08:15.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.732 Nvme0n1 : 5.00 23750.40 92.78 0.00 0.00 0.00 0.00 0.00 00:08:15.732 [2024-12-14T15:21:45.818Z] =================================================================================================================== 00:08:15.732 [2024-12-14T15:21:45.818Z] Total : 23750.40 92.78 0.00 0.00 0.00 0.00 0.00 00:08:15.732 00:08:16.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.667 Nvme0n1 : 6.00 23792.50 92.94 0.00 0.00 0.00 0.00 0.00 00:08:16.667 [2024-12-14T15:21:46.753Z] =================================================================================================================== 00:08:16.667 
[2024-12-14T15:21:46.753Z] Total : 23792.50 92.94 0.00 0.00 0.00 0.00 0.00 00:08:16.667 00:08:17.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.602 Nvme0n1 : 7.00 23820.57 93.05 0.00 0.00 0.00 0.00 0.00 00:08:17.602 [2024-12-14T15:21:47.688Z] =================================================================================================================== 00:08:17.602 [2024-12-14T15:21:47.688Z] Total : 23820.57 93.05 0.00 0.00 0.00 0.00 0.00 00:08:17.602 00:08:18.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.536 Nvme0n1 : 8.00 23843.38 93.14 0.00 0.00 0.00 0.00 0.00 00:08:18.536 [2024-12-14T15:21:48.622Z] =================================================================================================================== 00:08:18.536 [2024-12-14T15:21:48.622Z] Total : 23843.38 93.14 0.00 0.00 0.00 0.00 0.00 00:08:18.536 00:08:19.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.471 Nvme0n1 : 9.00 23842.67 93.14 0.00 0.00 0.00 0.00 0.00 00:08:19.471 [2024-12-14T15:21:49.557Z] =================================================================================================================== 00:08:19.471 [2024-12-14T15:21:49.557Z] Total : 23842.67 93.14 0.00 0.00 0.00 0.00 0.00 00:08:19.471 00:08:20.845 00:08:20.845 Latency(us) 00:08:20.845 [2024-12-14T15:21:50.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.845 Nvme0n1 : 10.00 23832.29 93.09 0.00 0.00 5367.92 2559.02 10860.25 00:08:20.845 [2024-12-14T15:21:50.931Z] =================================================================================================================== 00:08:20.845 [2024-12-14T15:21:50.931Z] Total : 23832.29 93.09 0.00 0.00 5367.92 2559.02 10860.25 00:08:20.845 { 00:08:20.845 "results": [ 00:08:20.845 { 00:08:20.845 "job": "Nvme0n1", 
00:08:20.845 "core_mask": "0x2", 00:08:20.845 "workload": "randwrite", 00:08:20.845 "status": "finished", 00:08:20.845 "queue_depth": 128, 00:08:20.845 "io_size": 4096, 00:08:20.845 "runtime": 10.0016, 00:08:20.845 "iops": 23832.286834106544, 00:08:20.845 "mibps": 93.09487044572869, 00:08:20.845 "io_failed": 0, 00:08:20.845 "io_timeout": 0, 00:08:20.845 "avg_latency_us": 5367.916357106198, 00:08:20.845 "min_latency_us": 2559.024761904762, 00:08:20.845 "max_latency_us": 10860.251428571428 00:08:20.845 } 00:08:20.845 ], 00:08:20.845 "core_count": 1 00:08:20.845 } 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 824060 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 824060 ']' 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 824060 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 824060 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 824060' 00:08:20.845 killing process with pid 824060 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 824060 00:08:20.845 Received 
shutdown signal, test time was about 10.000000 seconds 00:08:20.845 00:08:20.845 Latency(us) 00:08:20.845 [2024-12-14T15:21:50.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.845 [2024-12-14T15:21:50.931Z] =================================================================================================================== 00:08:20.845 [2024-12-14T15:21:50.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 824060 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.845 16:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.104 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:21.104 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 820917 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 820917 00:08:21.362 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 820917 Killed "${NVMF_APP[@]}" "$@" 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=825973 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 825973 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 825973 ']' 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.362 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.362 [2024-12-14 16:21:51.391218] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:21.362 [2024-12-14 16:21:51.391262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.621 [2024-12-14 16:21:51.467213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.621 [2024-12-14 16:21:51.488319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.621 [2024-12-14 16:21:51.488354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.622 [2024-12-14 16:21:51.488361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.622 [2024-12-14 16:21:51.488367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.622 [2024-12-14 16:21:51.488372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:21.622 [2024-12-14 16:21:51.488864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.622 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.622 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:21.622 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.622 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.622 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.622 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.622 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.880 [2024-12-14 16:21:51.794006] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:21.880 [2024-12-14 16:21:51.794088] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:21.880 [2024-12-14 16:21:51.794112] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:21.880 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:21.880 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf 00:08:21.880 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf 
00:08:21.880 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.880 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:21.880 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.880 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.880 16:21:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.139 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf -t 2000 00:08:22.139 [ 00:08:22.139 { 00:08:22.139 "name": "f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf", 00:08:22.139 "aliases": [ 00:08:22.139 "lvs/lvol" 00:08:22.139 ], 00:08:22.139 "product_name": "Logical Volume", 00:08:22.139 "block_size": 4096, 00:08:22.139 "num_blocks": 38912, 00:08:22.139 "uuid": "f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf", 00:08:22.139 "assigned_rate_limits": { 00:08:22.139 "rw_ios_per_sec": 0, 00:08:22.139 "rw_mbytes_per_sec": 0, 00:08:22.139 "r_mbytes_per_sec": 0, 00:08:22.139 "w_mbytes_per_sec": 0 00:08:22.139 }, 00:08:22.139 "claimed": false, 00:08:22.139 "zoned": false, 00:08:22.139 "supported_io_types": { 00:08:22.139 "read": true, 00:08:22.139 "write": true, 00:08:22.139 "unmap": true, 00:08:22.139 "flush": false, 00:08:22.139 "reset": true, 00:08:22.139 "nvme_admin": false, 00:08:22.139 "nvme_io": false, 00:08:22.139 "nvme_io_md": false, 00:08:22.139 "write_zeroes": true, 00:08:22.139 "zcopy": false, 00:08:22.139 "get_zone_info": false, 00:08:22.139 "zone_management": false, 00:08:22.139 "zone_append": 
false, 00:08:22.139 "compare": false, 00:08:22.139 "compare_and_write": false, 00:08:22.139 "abort": false, 00:08:22.139 "seek_hole": true, 00:08:22.139 "seek_data": true, 00:08:22.139 "copy": false, 00:08:22.139 "nvme_iov_md": false 00:08:22.139 }, 00:08:22.139 "driver_specific": { 00:08:22.139 "lvol": { 00:08:22.139 "lvol_store_uuid": "bc0094f0-ec79-4f20-bf1a-d06ff2051044", 00:08:22.139 "base_bdev": "aio_bdev", 00:08:22.139 "thin_provision": false, 00:08:22.139 "num_allocated_clusters": 38, 00:08:22.139 "snapshot": false, 00:08:22.139 "clone": false, 00:08:22.139 "esnap_clone": false 00:08:22.139 } 00:08:22.139 } 00:08:22.139 } 00:08:22.139 ] 00:08:22.139 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:22.139 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:22.139 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:22.398 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:22.398 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:22.398 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:22.656 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:22.656 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:22.914 [2024-12-14 16:21:52.746975] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.914 16:21:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:22.914 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:22.914 request: 00:08:22.914 { 00:08:22.914 "uuid": "bc0094f0-ec79-4f20-bf1a-d06ff2051044", 00:08:22.914 "method": "bdev_lvol_get_lvstores", 00:08:22.914 "req_id": 1 00:08:22.914 } 00:08:22.915 Got JSON-RPC error response 00:08:22.915 response: 00:08:22.915 { 00:08:22.915 "code": -19, 00:08:22.915 "message": "No such device" 00:08:22.915 } 00:08:22.915 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:22.915 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.915 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:22.915 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.915 16:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.173 aio_bdev 00:08:23.173 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf 00:08:23.173 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf 00:08:23.173 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.173 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:23.173 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.173 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.173 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.432 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf -t 2000 00:08:23.432 [ 00:08:23.432 { 00:08:23.432 "name": "f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf", 00:08:23.432 "aliases": [ 00:08:23.432 "lvs/lvol" 00:08:23.432 ], 00:08:23.432 "product_name": "Logical Volume", 00:08:23.432 "block_size": 4096, 00:08:23.432 "num_blocks": 38912, 00:08:23.432 "uuid": "f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf", 00:08:23.432 "assigned_rate_limits": { 00:08:23.432 "rw_ios_per_sec": 0, 00:08:23.432 "rw_mbytes_per_sec": 0, 00:08:23.432 "r_mbytes_per_sec": 0, 00:08:23.432 "w_mbytes_per_sec": 0 00:08:23.432 }, 00:08:23.432 "claimed": false, 00:08:23.432 "zoned": false, 00:08:23.432 "supported_io_types": { 00:08:23.432 "read": true, 00:08:23.432 "write": true, 00:08:23.432 "unmap": true, 00:08:23.432 "flush": false, 00:08:23.432 "reset": true, 00:08:23.432 "nvme_admin": false, 00:08:23.432 "nvme_io": false, 00:08:23.432 "nvme_io_md": false, 00:08:23.432 "write_zeroes": true, 00:08:23.432 "zcopy": false, 00:08:23.432 "get_zone_info": false, 00:08:23.432 "zone_management": false, 00:08:23.432 "zone_append": false, 00:08:23.432 "compare": false, 00:08:23.432 "compare_and_write": false, 
00:08:23.432 "abort": false, 00:08:23.432 "seek_hole": true, 00:08:23.432 "seek_data": true, 00:08:23.432 "copy": false, 00:08:23.432 "nvme_iov_md": false 00:08:23.432 }, 00:08:23.432 "driver_specific": { 00:08:23.432 "lvol": { 00:08:23.432 "lvol_store_uuid": "bc0094f0-ec79-4f20-bf1a-d06ff2051044", 00:08:23.432 "base_bdev": "aio_bdev", 00:08:23.432 "thin_provision": false, 00:08:23.432 "num_allocated_clusters": 38, 00:08:23.432 "snapshot": false, 00:08:23.432 "clone": false, 00:08:23.432 "esnap_clone": false 00:08:23.432 } 00:08:23.432 } 00:08:23.432 } 00:08:23.432 ] 00:08:23.432 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:23.691 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:23.691 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:23.691 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:23.691 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:23.691 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:23.949 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:23.949 16:21:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f50f6594-1e97-4bcf-8bf3-d20b1efc0fdf 00:08:24.207 16:21:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc0094f0-ec79-4f20-bf1a-d06ff2051044 00:08:24.207 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.465 00:08:24.465 real 0m16.807s 00:08:24.465 user 0m43.667s 00:08:24.465 sys 0m3.812s 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.465 ************************************ 00:08:24.465 END TEST lvs_grow_dirty 00:08:24.465 ************************************ 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:24.465 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:24.465 nvmf_trace.0 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.723 rmmod nvme_tcp 00:08:24.723 rmmod nvme_fabrics 00:08:24.723 rmmod nvme_keyring 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 825973 ']' 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 825973 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 825973 ']' 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 825973 
00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825973 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825973' 00:08:24.723 killing process with pid 825973 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 825973 00:08:24.723 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 825973 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.982 16:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.889 16:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.889 00:08:26.889 real 0m41.698s 00:08:26.889 user 1m4.558s 00:08:26.889 sys 0m10.140s 00:08:26.889 16:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.889 16:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.889 ************************************ 00:08:26.889 END TEST nvmf_lvs_grow 00:08:26.889 ************************************ 00:08:26.889 16:21:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:26.889 16:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.889 16:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.889 16:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.148 ************************************ 00:08:27.148 START TEST nvmf_bdev_io_wait 00:08:27.148 ************************************ 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:27.148 * Looking for test storage... 
00:08:27.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.148 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.148 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.148 --rc genhtml_branch_coverage=1 00:08:27.149 --rc genhtml_function_coverage=1 00:08:27.149 --rc genhtml_legend=1 00:08:27.149 --rc geninfo_all_blocks=1 00:08:27.149 --rc geninfo_unexecuted_blocks=1 00:08:27.149 00:08:27.149 ' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.149 --rc genhtml_branch_coverage=1 00:08:27.149 --rc genhtml_function_coverage=1 00:08:27.149 --rc genhtml_legend=1 00:08:27.149 --rc geninfo_all_blocks=1 00:08:27.149 --rc geninfo_unexecuted_blocks=1 00:08:27.149 00:08:27.149 ' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:27.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.149 --rc genhtml_branch_coverage=1 00:08:27.149 --rc genhtml_function_coverage=1 00:08:27.149 --rc genhtml_legend=1 00:08:27.149 --rc geninfo_all_blocks=1 00:08:27.149 --rc geninfo_unexecuted_blocks=1 00:08:27.149 00:08:27.149 ' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.149 --rc genhtml_branch_coverage=1 00:08:27.149 --rc genhtml_function_coverage=1 00:08:27.149 --rc genhtml_legend=1 00:08:27.149 --rc geninfo_all_blocks=1 00:08:27.149 --rc geninfo_unexecuted_blocks=1 00:08:27.149 00:08:27.149 ' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.149 16:21:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.149 16:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.719 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.719 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.719 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.719 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.719 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.719 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.720 16:22:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:33.720 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:33.720 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.720 16:22:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:33.720 Found net devices under 0000:af:00.0: cvl_0_0 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.720 
16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:33.720 Found net devices under 0000:af:00.1: cvl_0_1 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.720 16:22:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.720 16:22:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:08:33.720 00:08:33.720 --- 10.0.0.2 ping statistics --- 00:08:33.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.720 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:08:33.720 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:33.720 00:08:33.720 --- 10.0.0.1 ping statistics --- 00:08:33.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.721 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=830179 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 830179 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 830179 ']' 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 [2024-12-14 16:22:03.267072] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:33.721 [2024-12-14 16:22:03.267123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.721 [2024-12-14 16:22:03.345325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.721 [2024-12-14 16:22:03.370010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.721 [2024-12-14 16:22:03.370049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:33.721 [2024-12-14 16:22:03.370056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.721 [2024-12-14 16:22:03.370062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.721 [2024-12-14 16:22:03.370067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.721 [2024-12-14 16:22:03.373573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.721 [2024-12-14 16:22:03.373599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.721 [2024-12-14 16:22:03.373713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.721 [2024-12-14 16:22:03.373713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 16:22:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 [2024-12-14 16:22:03.548700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 Malloc0 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 
16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 [2024-12-14 16:22:03.603516] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=830206 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=830208 
00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.721 { 00:08:33.721 "params": { 00:08:33.721 "name": "Nvme$subsystem", 00:08:33.721 "trtype": "$TEST_TRANSPORT", 00:08:33.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.721 "adrfam": "ipv4", 00:08:33.721 "trsvcid": "$NVMF_PORT", 00:08:33.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.721 "hdgst": ${hdgst:-false}, 00:08:33.721 "ddgst": ${ddgst:-false} 00:08:33.721 }, 00:08:33.721 "method": "bdev_nvme_attach_controller" 00:08:33.721 } 00:08:33.721 EOF 00:08:33.721 )") 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=830210 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.721 { 00:08:33.721 "params": { 00:08:33.721 "name": "Nvme$subsystem", 00:08:33.721 "trtype": "$TEST_TRANSPORT", 00:08:33.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.721 "adrfam": "ipv4", 00:08:33.721 "trsvcid": "$NVMF_PORT", 00:08:33.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.721 "hdgst": ${hdgst:-false}, 00:08:33.721 "ddgst": ${ddgst:-false} 00:08:33.721 }, 00:08:33.721 "method": "bdev_nvme_attach_controller" 00:08:33.721 } 00:08:33.721 EOF 00:08:33.721 )") 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=830213 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:33.721 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.722 { 00:08:33.722 "params": { 00:08:33.722 "name": "Nvme$subsystem", 00:08:33.722 "trtype": "$TEST_TRANSPORT", 00:08:33.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.722 "adrfam": "ipv4", 00:08:33.722 "trsvcid": "$NVMF_PORT", 00:08:33.722 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.722 "hdgst": ${hdgst:-false}, 00:08:33.722 "ddgst": ${ddgst:-false} 00:08:33.722 }, 00:08:33.722 "method": "bdev_nvme_attach_controller" 00:08:33.722 } 00:08:33.722 EOF 00:08:33.722 )") 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.722 { 00:08:33.722 "params": { 00:08:33.722 "name": "Nvme$subsystem", 00:08:33.722 "trtype": "$TEST_TRANSPORT", 00:08:33.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.722 "adrfam": "ipv4", 00:08:33.722 "trsvcid": "$NVMF_PORT", 00:08:33.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.722 "hdgst": ${hdgst:-false}, 00:08:33.722 "ddgst": ${ddgst:-false} 00:08:33.722 }, 00:08:33.722 "method": "bdev_nvme_attach_controller" 00:08:33.722 } 00:08:33.722 EOF 00:08:33.722 )") 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 830206 00:08:33.722 16:22:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.722 "params": { 00:08:33.722 "name": "Nvme1", 00:08:33.722 "trtype": "tcp", 00:08:33.722 "traddr": "10.0.0.2", 00:08:33.722 "adrfam": "ipv4", 00:08:33.722 "trsvcid": "4420", 00:08:33.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.722 "hdgst": false, 00:08:33.722 "ddgst": false 00:08:33.722 }, 00:08:33.722 "method": "bdev_nvme_attach_controller" 00:08:33.722 }' 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.722 "params": { 00:08:33.722 "name": "Nvme1", 00:08:33.722 "trtype": "tcp", 00:08:33.722 "traddr": "10.0.0.2", 00:08:33.722 "adrfam": "ipv4", 00:08:33.722 "trsvcid": "4420", 00:08:33.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.722 "hdgst": false, 00:08:33.722 "ddgst": false 00:08:33.722 }, 00:08:33.722 "method": "bdev_nvme_attach_controller" 00:08:33.722 }' 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.722 "params": { 00:08:33.722 "name": "Nvme1", 00:08:33.722 "trtype": "tcp", 00:08:33.722 "traddr": "10.0.0.2", 00:08:33.722 "adrfam": "ipv4", 00:08:33.722 "trsvcid": "4420", 00:08:33.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.722 "hdgst": false, 00:08:33.722 "ddgst": false 00:08:33.722 }, 00:08:33.722 "method": "bdev_nvme_attach_controller" 00:08:33.722 }' 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:33.722 16:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.722 "params": { 00:08:33.722 "name": "Nvme1", 00:08:33.722 "trtype": "tcp", 00:08:33.722 "traddr": "10.0.0.2", 00:08:33.722 "adrfam": "ipv4", 00:08:33.722 "trsvcid": "4420", 00:08:33.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.722 "hdgst": false, 00:08:33.722 "ddgst": false 00:08:33.722 }, 00:08:33.722 "method": "bdev_nvme_attach_controller" 00:08:33.722 }' 00:08:33.722 [2024-12-14 16:22:03.655503] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:33.722 [2024-12-14 16:22:03.655548] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:33.722 [2024-12-14 16:22:03.657492] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:33.722 [2024-12-14 16:22:03.657491] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:33.722 [2024-12-14 16:22:03.657542] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:33.722 [2024-12-14 16:22:03.657543] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:33.722 [2024-12-14 16:22:03.659294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:33.722 [2024-12-14 16:22:03.659335] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:33.981 [2024-12-14 16:22:03.810059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.981 [2024-12-14 16:22:03.824625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:33.981 [2024-12-14 16:22:03.908400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.981 [2024-12-14 16:22:03.925529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:33.981 [2024-12-14 16:22:04.003683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.981 [2024-12-14 16:22:04.026946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:33.981 [2024-12-14 16:22:04.063199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.240 [2024-12-14 16:22:04.079045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:34.240 Running I/O for 1 seconds... 00:08:34.240 Running I/O for 1 seconds... 00:08:34.240 Running I/O for 1 seconds... 00:08:34.498 Running I/O for 1 seconds... 
00:08:35.435 11905.00 IOPS, 46.50 MiB/s 00:08:35.435 Latency(us) 00:08:35.435 [2024-12-14T15:22:05.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.435 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:35.435 Nvme1n1 : 1.01 11949.87 46.68 0.00 0.00 10671.10 6147.90 15978.30 00:08:35.435 [2024-12-14T15:22:05.521Z] =================================================================================================================== 00:08:35.435 [2024-12-14T15:22:05.521Z] Total : 11949.87 46.68 0.00 0.00 10671.10 6147.90 15978.30 00:08:35.435 243624.00 IOPS, 951.66 MiB/s 00:08:35.435 Latency(us) 00:08:35.435 [2024-12-14T15:22:05.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.435 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:35.435 Nvme1n1 : 1.00 243250.83 950.20 0.00 0.00 523.47 222.35 1513.57 00:08:35.435 [2024-12-14T15:22:05.521Z] =================================================================================================================== 00:08:35.435 [2024-12-14T15:22:05.521Z] Total : 243250.83 950.20 0.00 0.00 523.47 222.35 1513.57 00:08:35.435 10033.00 IOPS, 39.19 MiB/s 00:08:35.435 Latency(us) 00:08:35.435 [2024-12-14T15:22:05.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.435 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:35.435 Nvme1n1 : 1.01 10100.21 39.45 0.00 0.00 12627.56 5617.37 21720.50 00:08:35.435 [2024-12-14T15:22:05.521Z] =================================================================================================================== 00:08:35.435 [2024-12-14T15:22:05.521Z] Total : 10100.21 39.45 0.00 0.00 12627.56 5617.37 21720.50 00:08:35.435 10974.00 IOPS, 42.87 MiB/s 00:08:35.435 Latency(us) 00:08:35.435 [2024-12-14T15:22:05.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.435 Job: Nvme1n1 (Core 
Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:35.435 Nvme1n1 : 1.01 11056.90 43.19 0.00 0.00 11547.47 3370.42 24716.43 00:08:35.435 [2024-12-14T15:22:05.521Z] =================================================================================================================== 00:08:35.435 [2024-12-14T15:22:05.521Z] Total : 11056.90 43.19 0.00 0.00 11547.47 3370.42 24716.43 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 830208 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 830210 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 830213 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:35.435 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.435 rmmod nvme_tcp 00:08:35.435 rmmod nvme_fabrics 00:08:35.435 rmmod nvme_keyring 00:08:35.694 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.694 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:35.694 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:35.694 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 830179 ']' 00:08:35.694 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 830179 00:08:35.694 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 830179 ']' 00:08:35.694 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 830179 00:08:35.694 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 830179 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 830179' 00:08:35.695 killing process with pid 830179 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 830179 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 830179 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.695 16:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.231 00:08:38.231 real 0m10.802s 00:08:38.231 user 0m15.969s 00:08:38.231 sys 0m6.247s 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.231 ************************************ 00:08:38.231 END TEST nvmf_bdev_io_wait 00:08:38.231 
************************************ 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.231 ************************************ 00:08:38.231 START TEST nvmf_queue_depth 00:08:38.231 ************************************ 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:38.231 * Looking for test storage... 00:08:38.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:38.231 16:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.231 16:22:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.231 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:38.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.232 --rc genhtml_branch_coverage=1 00:08:38.232 --rc genhtml_function_coverage=1 00:08:38.232 --rc genhtml_legend=1 00:08:38.232 --rc geninfo_all_blocks=1 00:08:38.232 --rc 
geninfo_unexecuted_blocks=1 00:08:38.232 00:08:38.232 ' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:38.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.232 --rc genhtml_branch_coverage=1 00:08:38.232 --rc genhtml_function_coverage=1 00:08:38.232 --rc genhtml_legend=1 00:08:38.232 --rc geninfo_all_blocks=1 00:08:38.232 --rc geninfo_unexecuted_blocks=1 00:08:38.232 00:08:38.232 ' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:38.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.232 --rc genhtml_branch_coverage=1 00:08:38.232 --rc genhtml_function_coverage=1 00:08:38.232 --rc genhtml_legend=1 00:08:38.232 --rc geninfo_all_blocks=1 00:08:38.232 --rc geninfo_unexecuted_blocks=1 00:08:38.232 00:08:38.232 ' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:38.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.232 --rc genhtml_branch_coverage=1 00:08:38.232 --rc genhtml_function_coverage=1 00:08:38.232 --rc genhtml_legend=1 00:08:38.232 --rc geninfo_all_blocks=1 00:08:38.232 --rc geninfo_unexecuted_blocks=1 00:08:38.232 00:08:38.232 ' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.232 16:22:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.232 16:22:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.232 16:22:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.232 16:22:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.803 16:22:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:44.803 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:44.803 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:44.803 Found net devices under 0000:af:00.0: cvl_0_0 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:44.803 Found net devices under 0000:af:00.1: cvl_0_1 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.803 
16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.803 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:08:44.804 00:08:44.804 --- 10.0.0.2 ping statistics --- 00:08:44.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.804 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:08:44.804 00:08:44.804 --- 10.0.0.1 ping statistics --- 00:08:44.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.804 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.804 16:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=834028 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 834028 
00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 834028 ']' 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 [2024-12-14 16:22:14.056569] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:44.804 [2024-12-14 16:22:14.056613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.804 [2024-12-14 16:22:14.138553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.804 [2024-12-14 16:22:14.159712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.804 [2024-12-14 16:22:14.159745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:44.804 [2024-12-14 16:22:14.159752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.804 [2024-12-14 16:22:14.159758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.804 [2024-12-14 16:22:14.159764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.804 [2024-12-14 16:22:14.160248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 [2024-12-14 16:22:14.303716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 Malloc0 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 [2024-12-14 16:22:14.353786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.804 16:22:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=834170 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 834170 /var/tmp/bdevperf.sock 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 834170 ']' 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:44.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 [2024-12-14 16:22:14.403206] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:44.804 [2024-12-14 16:22:14.403247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834170 ] 00:08:44.804 [2024-12-14 16:22:14.476163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.804 [2024-12-14 16:22:14.498148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.804 NVMe0n1 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.804 16:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:44.804 Running I/O for 10 seconds... 
00:08:47.117 12096.00 IOPS, 47.25 MiB/s [2024-12-14T15:22:17.825Z] 12281.50 IOPS, 47.97 MiB/s [2024-12-14T15:22:19.202Z] 12290.00 IOPS, 48.01 MiB/s [2024-12-14T15:22:20.138Z] 12363.75 IOPS, 48.30 MiB/s [2024-12-14T15:22:21.220Z] 12473.80 IOPS, 48.73 MiB/s [2024-12-14T15:22:22.191Z] 12453.83 IOPS, 48.65 MiB/s [2024-12-14T15:22:23.126Z] 12473.71 IOPS, 48.73 MiB/s [2024-12-14T15:22:24.061Z] 12517.88 IOPS, 48.90 MiB/s [2024-12-14T15:22:24.998Z] 12511.67 IOPS, 48.87 MiB/s [2024-12-14T15:22:24.998Z] 12547.20 IOPS, 49.01 MiB/s 00:08:54.912 Latency(us) 00:08:54.912 [2024-12-14T15:22:24.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.912 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:54.912 Verification LBA range: start 0x0 length 0x4000 00:08:54.912 NVMe0n1 : 10.07 12567.23 49.09 0.00 0.00 81186.96 18724.57 53677.10 00:08:54.912 [2024-12-14T15:22:24.998Z] =================================================================================================================== 00:08:54.912 [2024-12-14T15:22:24.998Z] Total : 12567.23 49.09 0.00 0.00 81186.96 18724.57 53677.10 00:08:54.912 { 00:08:54.912 "results": [ 00:08:54.912 { 00:08:54.912 "job": "NVMe0n1", 00:08:54.912 "core_mask": "0x1", 00:08:54.912 "workload": "verify", 00:08:54.912 "status": "finished", 00:08:54.912 "verify_range": { 00:08:54.912 "start": 0, 00:08:54.912 "length": 16384 00:08:54.912 }, 00:08:54.912 "queue_depth": 1024, 00:08:54.912 "io_size": 4096, 00:08:54.912 "runtime": 10.065543, 00:08:54.912 "iops": 12567.230600475305, 00:08:54.912 "mibps": 49.09074453310666, 00:08:54.912 "io_failed": 0, 00:08:54.912 "io_timeout": 0, 00:08:54.912 "avg_latency_us": 81186.9574625661, 00:08:54.912 "min_latency_us": 18724.571428571428, 00:08:54.912 "max_latency_us": 53677.10476190476 00:08:54.912 } 00:08:54.912 ], 00:08:54.912 "core_count": 1 00:08:54.912 } 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 834170 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 834170 ']' 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 834170 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 834170 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 834170' 00:08:54.912 killing process with pid 834170 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 834170 00:08:54.912 Received shutdown signal, test time was about 10.000000 seconds 00:08:54.912 00:08:54.912 Latency(us) 00:08:54.912 [2024-12-14T15:22:24.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.912 [2024-12-14T15:22:24.998Z] =================================================================================================================== 00:08:54.912 [2024-12-14T15:22:24.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:54.912 16:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 834170 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.173 rmmod nvme_tcp 00:08:55.173 rmmod nvme_fabrics 00:08:55.173 rmmod nvme_keyring 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 834028 ']' 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 834028 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 834028 ']' 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 834028 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 834028 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 834028' 00:08:55.173 killing process with pid 834028 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 834028 00:08:55.173 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 834028 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.433 16:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.969 16:22:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:57.969 00:08:57.969 real 0m19.603s 00:08:57.969 user 0m22.958s 00:08:57.969 sys 0m6.032s 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.969 ************************************ 00:08:57.969 END TEST nvmf_queue_depth 00:08:57.969 ************************************ 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.969 ************************************ 00:08:57.969 START TEST nvmf_target_multipath 00:08:57.969 ************************************ 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:57.969 * Looking for test storage... 
00:08:57.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:57.969 16:22:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:57.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.969 --rc genhtml_branch_coverage=1 00:08:57.969 --rc genhtml_function_coverage=1 00:08:57.969 --rc genhtml_legend=1 00:08:57.969 --rc geninfo_all_blocks=1 00:08:57.969 --rc geninfo_unexecuted_blocks=1 00:08:57.969 00:08:57.969 ' 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:57.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.969 --rc genhtml_branch_coverage=1 00:08:57.969 --rc genhtml_function_coverage=1 00:08:57.969 --rc genhtml_legend=1 00:08:57.969 --rc geninfo_all_blocks=1 00:08:57.969 --rc geninfo_unexecuted_blocks=1 00:08:57.969 00:08:57.969 ' 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:57.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.969 --rc genhtml_branch_coverage=1 00:08:57.969 --rc genhtml_function_coverage=1 00:08:57.969 --rc genhtml_legend=1 00:08:57.969 --rc geninfo_all_blocks=1 00:08:57.969 --rc geninfo_unexecuted_blocks=1 00:08:57.969 00:08:57.969 ' 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:57.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.969 --rc genhtml_branch_coverage=1 00:08:57.969 --rc genhtml_function_coverage=1 00:08:57.969 --rc genhtml_legend=1 00:08:57.969 --rc geninfo_all_blocks=1 00:08:57.969 --rc geninfo_unexecuted_blocks=1 00:08:57.969 00:08:57.969 ' 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.969 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:57.970 16:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.536 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.536 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.536 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:04.537 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:04.537 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:04.537 Found net devices under 0000:af:00.0: cvl_0_0 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.537 16:22:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:04.537 Found net devices under 0000:af:00.1: cvl_0_1 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:04.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:04.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms
00:09:04.537 
00:09:04.537 --- 10.0.0.2 ping statistics ---
00:09:04.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:04.537 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:04.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:04.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms
00:09:04.537 
00:09:04.537 --- 10.0.0.1 ping statistics ---
00:09:04.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:04.537 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:04.537 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:09:04.538 only one NIC for nvmf test
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:04.538 rmmod nvme_tcp
00:09:04.538 rmmod nvme_fabrics
00:09:04.538 rmmod nvme_keyring
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:04.538 16:22:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:05.921 
00:09:05.921 real	0m8.427s
00:09:05.921 user	0m1.845s
00:09:05.921 sys	0m4.492s
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:05.921 16:22:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:05.921 ************************************
00:09:05.921 END TEST nvmf_target_multipath
00:09:05.921 ************************************
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:06.180 ************************************
00:09:06.180 START TEST nvmf_zcopy
00:09:06.180 ************************************
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:06.180 * Looking for test storage...
00:09:06.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:06.180 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:06.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:06.181 --rc genhtml_branch_coverage=1
00:09:06.181 --rc genhtml_function_coverage=1
00:09:06.181 --rc genhtml_legend=1
00:09:06.181 --rc geninfo_all_blocks=1
00:09:06.181 --rc geninfo_unexecuted_blocks=1
00:09:06.181 
00:09:06.181 '
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:06.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:06.181 --rc genhtml_branch_coverage=1
00:09:06.181 --rc genhtml_function_coverage=1
00:09:06.181 --rc genhtml_legend=1
00:09:06.181 --rc geninfo_all_blocks=1
00:09:06.181 --rc geninfo_unexecuted_blocks=1
00:09:06.181 
00:09:06.181 '
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:06.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:06.181 --rc genhtml_branch_coverage=1
00:09:06.181 --rc genhtml_function_coverage=1
00:09:06.181 --rc genhtml_legend=1
00:09:06.181 --rc geninfo_all_blocks=1
00:09:06.181 --rc geninfo_unexecuted_blocks=1
00:09:06.181 
00:09:06.181 '
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:06.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:06.181 --rc genhtml_branch_coverage=1
00:09:06.181 --rc genhtml_function_coverage=1
00:09:06.181 --rc genhtml_legend=1
00:09:06.181 --rc geninfo_all_blocks=1
00:09:06.181 --rc geninfo_unexecuted_blocks=1
00:09:06.181 
00:09:06.181 '
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:09:06.181 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:09:06.440 16:22:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:09:13.007 Found 0000:af:00.0 (0x8086 - 0x159b)
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:09:13.007 Found 0000:af:00.1 (0x8086 - 0x159b)
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:09:13.007 Found net devices under 0000:af:00.0: cvl_0_0
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:09:13.007 Found net devices under 0000:af:00.1: cvl_0_1
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:13.007 16:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:09:13.007 00:09:13.007 --- 10.0.0.2 ping statistics --- 00:09:13.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.007 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:09:13.007 00:09:13.007 --- 10.0.0.1 ping statistics --- 00:09:13.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.007 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:13.007 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=842914 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 842914 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 842914 ']' 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 [2024-12-14 16:22:42.307391] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:13.008 [2024-12-14 16:22:42.307438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.008 [2024-12-14 16:22:42.387524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.008 [2024-12-14 16:22:42.408475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.008 [2024-12-14 16:22:42.408513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:13.008 [2024-12-14 16:22:42.408520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.008 [2024-12-14 16:22:42.408526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.008 [2024-12-14 16:22:42.408531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.008 [2024-12-14 16:22:42.408987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 [2024-12-14 16:22:42.539399] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 [2024-12-14 16:22:42.563584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 malloc0 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.008 { 00:09:13.008 "params": { 00:09:13.008 "name": "Nvme$subsystem", 00:09:13.008 "trtype": "$TEST_TRANSPORT", 00:09:13.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.008 "adrfam": "ipv4", 00:09:13.008 "trsvcid": "$NVMF_PORT", 00:09:13.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.008 "hdgst": ${hdgst:-false}, 00:09:13.008 "ddgst": ${ddgst:-false} 00:09:13.008 }, 00:09:13.008 "method": "bdev_nvme_attach_controller" 00:09:13.008 } 00:09:13.008 EOF 00:09:13.008 )") 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:13.008 16:22:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:13.008 "params": { 00:09:13.008 "name": "Nvme1", 00:09:13.008 "trtype": "tcp", 00:09:13.008 "traddr": "10.0.0.2", 00:09:13.008 "adrfam": "ipv4", 00:09:13.008 "trsvcid": "4420", 00:09:13.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.008 "hdgst": false, 00:09:13.008 "ddgst": false 00:09:13.008 }, 00:09:13.008 "method": "bdev_nvme_attach_controller" 00:09:13.008 }' 00:09:13.008 [2024-12-14 16:22:42.647826] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:13.008 [2024-12-14 16:22:42.647868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842934 ] 00:09:13.008 [2024-12-14 16:22:42.721334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.008 [2024-12-14 16:22:42.743716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.008 Running I/O for 10 seconds... 
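The trace above shows `gen_nvmf_target_json` expanding a heredoc template into a `bdev_nvme_attach_controller` JSON fragment and handing it to bdevperf via `--json /dev/fd/62`. A minimal standalone sketch of that pattern, with the function name and the python3 validation step being illustrative additions (the real helper lives in `nvmf/common.sh` and is piped straight into bdevperf):

```shell
# Sketch of the gen_nvmf_target_json pattern from the trace: emit the
# attach-controller config as JSON, then feed it to a consumer over a fd.
# gen_target_json is a stand-in name, not SPDK's actual function.
gen_target_json() {
    local subsystem=1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# In the log this fragment is consumed roughly as:
#   bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62< <(gen_target_json)
# Here we only check the fragment is valid JSON carrying the expected method.
gen_target_json | python3 -c 'import json,sys; print(json.load(sys.stdin)["method"])'
```

The fd-based handoff lets the test pass a per-run config to bdevperf without writing a temp file, which is why the trace shows `/dev/fd/62` rather than a path.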
00:09:15.318 8736.00 IOPS, 68.25 MiB/s [2024-12-14T15:22:46.339Z] 8804.50 IOPS, 68.79 MiB/s [2024-12-14T15:22:47.273Z] 8842.33 IOPS, 69.08 MiB/s [2024-12-14T15:22:48.208Z] 8848.75 IOPS, 69.13 MiB/s [2024-12-14T15:22:49.142Z] 8850.60 IOPS, 69.15 MiB/s [2024-12-14T15:22:50.077Z] 8855.33 IOPS, 69.18 MiB/s [2024-12-14T15:22:51.452Z] 8864.29 IOPS, 69.25 MiB/s [2024-12-14T15:22:52.386Z] 8864.12 IOPS, 69.25 MiB/s [2024-12-14T15:22:53.321Z] 8864.22 IOPS, 69.25 MiB/s [2024-12-14T15:22:53.321Z] 8865.70 IOPS, 69.26 MiB/s 00:09:23.235 Latency(us) 00:09:23.235 [2024-12-14T15:22:53.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.235 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:23.235 Verification LBA range: start 0x0 length 0x1000 00:09:23.235 Nvme1n1 : 10.01 8869.03 69.29 0.00 0.00 14391.37 2293.76 24217.11 00:09:23.235 [2024-12-14T15:22:53.321Z] =================================================================================================================== 00:09:23.235 [2024-12-14T15:22:53.321Z] Total : 8869.03 69.29 0.00 0.00 14391.37 2293.76 24217.11 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=844716 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.235 16:22:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.235 { 00:09:23.235 "params": { 00:09:23.235 "name": "Nvme$subsystem", 00:09:23.235 "trtype": "$TEST_TRANSPORT", 00:09:23.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.235 "adrfam": "ipv4", 00:09:23.235 "trsvcid": "$NVMF_PORT", 00:09:23.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.235 "hdgst": ${hdgst:-false}, 00:09:23.235 "ddgst": ${ddgst:-false} 00:09:23.235 }, 00:09:23.235 "method": "bdev_nvme_attach_controller" 00:09:23.235 } 00:09:23.235 EOF 00:09:23.235 )") 00:09:23.235 [2024-12-14 16:22:53.209807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.209839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:23.235 16:22:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.235 "params": { 00:09:23.235 "name": "Nvme1", 00:09:23.235 "trtype": "tcp", 00:09:23.235 "traddr": "10.0.0.2", 00:09:23.235 "adrfam": "ipv4", 00:09:23.235 "trsvcid": "4420", 00:09:23.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.235 "hdgst": false, 00:09:23.235 "ddgst": false 00:09:23.235 }, 00:09:23.235 "method": "bdev_nvme_attach_controller" 00:09:23.235 }' 00:09:23.235 [2024-12-14 16:22:53.221807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.221821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 [2024-12-14 16:22:53.233835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.233845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 [2024-12-14 16:22:53.245863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.245872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 [2024-12-14 16:22:53.251378] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:23.235 [2024-12-14 16:22:53.251423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844716 ] 00:09:23.235 [2024-12-14 16:22:53.257895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.257906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 [2024-12-14 16:22:53.269925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.269935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 [2024-12-14 16:22:53.281958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.281968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 [2024-12-14 16:22:53.293990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.294000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 [2024-12-14 16:22:53.306021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.306030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.235 [2024-12-14 16:22:53.318055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.235 [2024-12-14 16:22:53.318065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.494 [2024-12-14 16:22:53.326142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.494 [2024-12-14 16:22:53.330095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:23.494 [2024-12-14 16:22:53.330106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.494 [2024-12-14 16:22:53.342139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.494 [2024-12-14 16:22:53.342161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.494 [2024-12-14 16:22:53.348648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.494 [2024-12-14 16:22:53.354161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.494 [2024-12-14 16:22:53.354173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.494 [2024-12-14 16:22:53.366203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.494 [2024-12-14 16:22:53.366222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.494 [2024-12-14 16:22:53.378234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.494 [2024-12-14 16:22:53.378249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.390263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.390276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.402300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.402311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.414335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.414347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.426374] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.426384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.438406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.438425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.450435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.450453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.462466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.462480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.474497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.474511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.486527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.486538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.498561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.498571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.510592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.510601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.522630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.522644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.534660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.534669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.546693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.546702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.558727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.558737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.495 [2024-12-14 16:22:53.570760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.495 [2024-12-14 16:22:53.570770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.753 [2024-12-14 16:22:53.582806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.753 [2024-12-14 16:22:53.582822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.753 Running I/O for 5 seconds... 
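The long run of "Requested NSID 1 already in use" entries that follows is expected: while the 5-second randrw job is in flight, the test repeatedly re-issues `nvmf_subsystem_add_ns` for an NSID that is already attached and treats the RPC failure as a pass. A hedged sketch of that negative-check shape, with `rpc_cmd` stubbed out here (the real test invokes SPDK's RPC against the running target in the netns):

```shell
# Negative-check pattern suggested by the repeated errors above: adding a
# duplicate NSID must fail, and the test asserts on that failure.
# rpc_cmd is a local stub standing in for SPDK's rpc.py wrapper.
rpc_cmd() {
    # Stub: a second add of NSID 1 fails, matching the target log lines.
    echo "Unable to add namespace" >&2
    return 1
}

if rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 2>/dev/null; then
    echo "unexpected success"
    exit 1
else
    echo "duplicate NSID correctly rejected"
fi
```

Reading the ERROR lines as deliberate probes rather than faults is the key to interpreting this stretch of the log: each pair of `subsystem.c` / `nvmf_rpc.c` entries is one iteration of the check.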
00:09:23.753 [2024-12-14 16:22:53.597231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:23.753 [2024-12-14 16:22:53.597249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with only the timestamps changing, at roughly 14 ms intervals, from 16:22:53.611 through 16:22:55.938; the elapsed-time prefix advances from 00:09:23.753 to 00:09:26.089 ...]
00:09:24.531 17021.00 IOPS, 132.98 MiB/s [2024-12-14T15:22:54.617Z]
00:09:25.572 17013.00 IOPS, 132.91 MiB/s [2024-12-14T15:22:55.658Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.089 [2024-12-14 16:22:55.951378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.089 [2024-12-14 16:22:55.951396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.089 [2024-12-14 16:22:55.965577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:55.965596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:55.978945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:55.978964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:55.992789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:55.992808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.006808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.006827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.020901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.020919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.034811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.034830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.048593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.048612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:26.090 [2024-12-14 16:22:56.061615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.061634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.075386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.075404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.089009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.089027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.102640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.102659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.116406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.116425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.130202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.130221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.143629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.143647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.157488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.157506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.090 [2024-12-14 16:22:56.171163] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.090 [2024-12-14 16:22:56.171182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.348 [2024-12-14 16:22:56.184942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.348 [2024-12-14 16:22:56.184960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.348 [2024-12-14 16:22:56.198533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.348 [2024-12-14 16:22:56.198552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.348 [2024-12-14 16:22:56.212660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.348 [2024-12-14 16:22:56.212678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.348 [2024-12-14 16:22:56.226302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.348 [2024-12-14 16:22:56.226322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.348 [2024-12-14 16:22:56.240452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.348 [2024-12-14 16:22:56.240472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.348 [2024-12-14 16:22:56.254207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.348 [2024-12-14 16:22:56.254225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.348 [2024-12-14 16:22:56.267909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.348 [2024-12-14 16:22:56.267932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.348 [2024-12-14 16:22:56.281530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.281550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.295745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.295763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.309928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.309946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.323173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.323192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.336726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.336745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.350922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.350941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.362029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.362049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.375972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.375991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.389573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 
[2024-12-14 16:22:56.389593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.403167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.403186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.416819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.416838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.349 [2024-12-14 16:22:56.430485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.349 [2024-12-14 16:22:56.430504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.607 [2024-12-14 16:22:56.444322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.607 [2024-12-14 16:22:56.444341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.607 [2024-12-14 16:22:56.458343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.607 [2024-12-14 16:22:56.458362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.607 [2024-12-14 16:22:56.472140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.607 [2024-12-14 16:22:56.472160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.607 [2024-12-14 16:22:56.485709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.607 [2024-12-14 16:22:56.485730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.607 [2024-12-14 16:22:56.499144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.607 [2024-12-14 16:22:56.499164] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.513350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.513370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.527110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.527133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.540938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.540958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.554264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.554282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.567711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.567731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.581486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.581506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.595334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.595354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 17041.33 IOPS, 133.14 MiB/s [2024-12-14T15:22:56.694Z] [2024-12-14 16:22:56.609374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.609394] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.620299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.620318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.634846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.634865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.648231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.648250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.661955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.661975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.675587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.675607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.608 [2024-12-14 16:22:56.688931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.608 [2024-12-14 16:22:56.688950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.866 [2024-12-14 16:22:56.703045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.866 [2024-12-14 16:22:56.703064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.866 [2024-12-14 16:22:56.716873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.866 [2024-12-14 16:22:56.716892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:26.866 [2024-12-14 16:22:56.731009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.866 [2024-12-14 16:22:56.731028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.866 [2024-12-14 16:22:56.744333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.744352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.758225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.758245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.771924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.771943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.785210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.785233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.799525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.799544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.812938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.812957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.827173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.827193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.840867] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.840885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.854563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.854598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.868303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.868322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.881839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.881858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.896275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.896293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.911902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.911920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.925735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.925753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.867 [2024-12-14 16:22:56.939728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.867 [2024-12-14 16:22:56.939748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.125 [2024-12-14 16:22:56.953554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:27.125 [2024-12-14 16:22:56.953580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.125 [2024-12-14 16:22:56.967255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.125 [2024-12-14 16:22:56.967273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:56.980983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:56.981001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:56.994668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:56.994687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.008323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.008342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.022280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.022299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.036653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.036671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.051690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.051708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.065554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 
[2024-12-14 16:22:57.065579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.079153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.079172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.093249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.093268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.107215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.107234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.121050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.121069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.134289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.134308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.148073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.148091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.161683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.161702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.175056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.175075] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.188625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.188644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.126 [2024-12-14 16:22:57.201981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.126 [2024-12-14 16:22:57.202000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.384 [2024-12-14 16:22:57.216323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.216342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.384 [2024-12-14 16:22:57.227629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.227647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.384 [2024-12-14 16:22:57.241754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.241773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.384 [2024-12-14 16:22:57.255710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.255728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.384 [2024-12-14 16:22:57.269049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.269067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.384 [2024-12-14 16:22:57.282799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.282818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:27.384 [2024-12-14 16:22:57.296507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.296526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.384 [2024-12-14 16:22:57.310176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.310195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.384 [2024-12-14 16:22:57.323976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.384 [2024-12-14 16:22:57.323995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.337833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.337851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.351704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.351723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.365054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.365072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.378964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.378984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.392786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.392805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.406548] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.406572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.420553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.420578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.434053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.434072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.447961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.447980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.385 [2024-12-14 16:22:57.461904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.385 [2024-12-14 16:22:57.461923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.643 [2024-12-14 16:22:57.475510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.643 [2024-12-14 16:22:57.475528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.643 [2024-12-14 16:22:57.489241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.643 [2024-12-14 16:22:57.489259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.643 [2024-12-14 16:22:57.502979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.643 [2024-12-14 16:22:57.502997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.643 [2024-12-14 16:22:57.516736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:27.643 [2024-12-14 16:22:57.516755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.643 [... identical "subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" pairs repeated, output truncated ...] 17040.25 IOPS, 133.13 MiB/s [2024-12-14T15:22:57.729Z] [... identical error pairs repeated, output truncated ...] 17025.00 IOPS, 133.01 MiB/s [2024-12-14T15:22:58.766Z] [2024-12-14 16:22:58.603488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.680 [2024-12-14 16:22:58.603507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.680 00:09:28.680 Latency(us) 00:09:28.680 [2024-12-14T15:22:58.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.680 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:28.680 Nvme1n1 : 5.01 17030.03 133.05 0.00 0.00 7509.17 3557.67 13981.01 00:09:28.680 [2024-12-14T15:22:58.766Z] 
=================================================================================================================== 00:09:28.680 [2024-12-14T15:22:58.766Z] Total : 17030.03 133.05 0.00 0.00 7509.17 3557.67 13981.01 00:09:28.680 [2024-12-14 16:22:58.612577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.680 [2024-12-14 16:22:58.612610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.680 [... identical error pairs repeated, output truncated ...] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (844716) - No such process 00:09:28.680 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 844716 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- 
# set +x 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.939 delay0 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.939 16:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:28.939 [2024-12-14 16:22:58.864883] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:35.503 Initializing NVMe Controllers 00:09:35.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:35.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:35.503 Initialization complete. Launching workers. 
00:09:35.503 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 125 00:09:35.503 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 410, failed to submit 35 00:09:35.503 success 214, unsuccessful 196, failed 0 00:09:35.503 16:23:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:35.503 16:23:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:35.503 16:23:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.503 16:23:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:35.503 16:23:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.503 16:23:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:35.503 16:23:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.503 16:23:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.503 rmmod nvme_tcp 00:09:35.503 rmmod nvme_fabrics 00:09:35.503 rmmod nvme_keyring 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 842914 ']' 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 842914 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 842914 ']' 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 842914 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 842914 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 842914' 00:09:35.503 killing process with pid 842914 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 842914 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 842914 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.503 16:23:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.410 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.410 00:09:37.410 real 0m31.278s 00:09:37.410 user 0m41.792s 00:09:37.410 sys 0m11.042s 00:09:37.410 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.410 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.410 ************************************ 00:09:37.410 END TEST nvmf_zcopy 00:09:37.410 ************************************ 00:09:37.410 16:23:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:37.410 16:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.410 16:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.410 16:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.410 ************************************ 00:09:37.410 START TEST nvmf_nmic 00:09:37.410 ************************************ 00:09:37.410 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:37.670 * Looking for test storage... 
00:09:37.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.670 16:23:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.670 --rc genhtml_branch_coverage=1 00:09:37.670 --rc genhtml_function_coverage=1 00:09:37.670 --rc genhtml_legend=1 00:09:37.670 --rc geninfo_all_blocks=1 00:09:37.670 --rc geninfo_unexecuted_blocks=1 
00:09:37.670 00:09:37.670 ' 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.670 --rc genhtml_branch_coverage=1 00:09:37.670 --rc genhtml_function_coverage=1 00:09:37.670 --rc genhtml_legend=1 00:09:37.670 --rc geninfo_all_blocks=1 00:09:37.670 --rc geninfo_unexecuted_blocks=1 00:09:37.670 00:09:37.670 ' 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.670 --rc genhtml_branch_coverage=1 00:09:37.670 --rc genhtml_function_coverage=1 00:09:37.670 --rc genhtml_legend=1 00:09:37.670 --rc geninfo_all_blocks=1 00:09:37.670 --rc geninfo_unexecuted_blocks=1 00:09:37.670 00:09:37.670 ' 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.670 --rc genhtml_branch_coverage=1 00:09:37.670 --rc genhtml_function_coverage=1 00:09:37.670 --rc genhtml_legend=1 00:09:37.670 --rc geninfo_all_blocks=1 00:09:37.670 --rc geninfo_unexecuted_blocks=1 00:09:37.670 00:09:37.670 ' 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.670 16:23:07 
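The xtrace above steps through `scripts/common.sh`'s `cmp_versions`/`lt` helpers to decide whether the installed lcov (1.15) predates version 2 before choosing coverage options. A minimal standalone sketch of the same dotted-version comparison idea (this is an illustrative re-implementation, not the actual `scripts/common.sh` code):

```shell
# lt VER1 VER2 -> exit 0 iff VER1 < VER2, comparing dot-separated
# components numerically, left to right; missing components count as 0.
lt() {
    local IFS=.
    local -a a=($1) b=($2)        # split on dots via IFS word splitting
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # first differing component decides
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "less than"
}
```

With this, `lt 1.15 2` succeeds (so the trace selects the lcov-1.x branch-coverage flags), while `lt 2 1.15` and `lt 1.15 1.15` fail.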
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:37.670 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.671 
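The trace that follows (`gather_supported_nvmf_pci_devs`) enumerates candidate NICs by matching PCI functions against known vendor:device pairs such as Intel E810 (`0x8086:0x159b`). A self-contained sketch of that matching idea; the function name and the `lspci -Dnn`-style sample lines are hypothetical, and the real script consults a prebuilt `pci_bus_cache` rather than parsing text:

```shell
# find_e810: read lspci -Dnn style lines on stdin, print the PCI address
# of every function whose vendor:device pair is 8086:159b (Intel E810).
find_e810() {
    local line
    while read -r line; do
        case $line in
            *'[8086:159b]'*) echo "${line%% *}" ;;   # first field is the address
        esac
    done
}
```

Fed the two E810 ports seen in the log (`0000:af:00.0`, `0000:af:00.1`) plus an unrelated device, it prints only the two matching addresses.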
16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.671 16:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.244 16:23:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.244 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:44.245 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:44.245 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:44.245 Found net devices under 0000:af:00.0: cvl_0_0 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:44.245 Found net devices under 0000:af:00.1: cvl_0_1 00:09:44.245 
16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:44.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:09:44.245 00:09:44.245 --- 10.0.0.2 ping statistics --- 00:09:44.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.245 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:44.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:44.245 00:09:44.245 --- 10.0.0.1 ping statistics --- 00:09:44.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.245 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=850197 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 850197 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 850197 ']' 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.245 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.245 [2024-12-14 16:23:13.576721] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:44.245 [2024-12-14 16:23:13.576769] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.245 [2024-12-14 16:23:13.652535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.245 [2024-12-14 16:23:13.676373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.246 [2024-12-14 16:23:13.676413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:44.246 [2024-12-14 16:23:13.676420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.246 [2024-12-14 16:23:13.676428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.246 [2024-12-14 16:23:13.676433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.246 [2024-12-14 16:23:13.677916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.246 [2024-12-14 16:23:13.677944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.246 [2024-12-14 16:23:13.678051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.246 [2024-12-14 16:23:13.678052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 [2024-12-14 16:23:13.818378] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.246 
16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 Malloc0 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 [2024-12-14 16:23:13.884277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:44.246 test case1: single bdev can't be used in multiple subsystems 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 [2024-12-14 16:23:13.912180] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:44.246 [2024-12-14 
16:23:13.912201] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:44.246 [2024-12-14 16:23:13.912209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.246 request: 00:09:44.246 { 00:09:44.246 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:44.246 "namespace": { 00:09:44.246 "bdev_name": "Malloc0", 00:09:44.246 "no_auto_visible": false, 00:09:44.246 "hide_metadata": false 00:09:44.246 }, 00:09:44.246 "method": "nvmf_subsystem_add_ns", 00:09:44.246 "req_id": 1 00:09:44.246 } 00:09:44.246 Got JSON-RPC error response 00:09:44.246 response: 00:09:44.246 { 00:09:44.246 "code": -32602, 00:09:44.246 "message": "Invalid parameters" 00:09:44.246 } 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:44.246 Adding namespace failed - expected result. 
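The trace above is a negative test: adding the already-claimed `Malloc0` bdev to a second subsystem must fail, and the script records the nonzero `rpc_cmd` status in `nmic_status` before asserting it. A generic sketch of that expect-failure pattern (`expect_fail` is a hypothetical helper, not part of the SPDK test scripts):

```shell
# expect_fail CMD ARGS... -> succeed only if CMD exits nonzero,
# mirroring how nmic.sh captures rpc_cmd's status into nmic_status
# and treats a zero status as a test failure.
expect_fail() {
    local status=0
    "$@" || status=$?
    if [ "$status" -eq 0 ]; then
        echo "unexpected success: $*" >&2
        return 1
    fi
    echo "failed as expected: $*"
}
```

Here `expect_fail false` succeeds and `expect_fail true` fails, the same inversion the script applies to the duplicate `nvmf_subsystem_add_ns` call.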
00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:44.246 test case2: host connect to nvmf target in multiple paths 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.246 [2024-12-14 16:23:13.924300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.246 16:23:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:45.183 16:23:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:46.561 16:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:46.561 16:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:46.561 16:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.561 16:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:46.561 16:23:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
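The `waitforserial` helper traced above polls `lsblk -l -o NAME,SERIAL` until a namespace with the expected serial appears, sleeping between attempts. A rough sketch of that retry loop, with a stub in place of `lsblk` (the stub and function names are mine, not SPDK's, and the stub reports the device immediately):

```shell
# Stub for `lsblk -l -o NAME,SERIAL`; pretend the namespace is already visible.
list_block_devs() { echo 'nvme0n1 SPDKISFASTANDAWESOME'; }

wait_for_serial() {
    serial=$1 i=0
    while [ $((i += 1)) -le 15 ]; do        # mirrors `(( i++ <= 15 ))`
        nvme_devices=$(list_block_devs | grep -c "$serial")
        [ "$nvme_devices" -ge 1 ] && return 0
        sleep 2                              # the real helper also sleeps 2s
    done
    return 1                                 # serial never showed up
}

wait_for_serial SPDKISFASTANDAWESOME && echo 'device ready'
```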
00:09:48.465 16:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:48.465 16:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:48.465 16:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.465 16:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:48.465 16:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.465 16:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:48.465 16:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:48.466 [global] 00:09:48.466 thread=1 00:09:48.466 invalidate=1 00:09:48.466 rw=write 00:09:48.466 time_based=1 00:09:48.466 runtime=1 00:09:48.466 ioengine=libaio 00:09:48.466 direct=1 00:09:48.466 bs=4096 00:09:48.466 iodepth=1 00:09:48.466 norandommap=0 00:09:48.466 numjobs=1 00:09:48.466 00:09:48.466 verify_dump=1 00:09:48.466 verify_backlog=512 00:09:48.466 verify_state_save=0 00:09:48.466 do_verify=1 00:09:48.466 verify=crc32c-intel 00:09:48.466 [job0] 00:09:48.466 filename=/dev/nvme0n1 00:09:48.466 Could not set queue depth (nvme0n1) 00:09:48.724 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.724 fio-3.35 00:09:48.724 Starting 1 thread 00:09:50.115 00:09:50.115 job0: (groupid=0, jobs=1): err= 0: pid=851179: Sat Dec 14 16:23:19 2024 00:09:50.115 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:09:50.115 slat (nsec): min=9590, max=24320, avg=21161.91, stdev=2603.29 00:09:50.115 clat (usec): min=40911, max=41221, avg=40979.92, stdev=66.20 00:09:50.115 lat (usec): min=40933, max=41231, 
avg=41001.08, stdev=64.41 00:09:50.115 clat percentiles (usec): 00:09:50.115 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:50.115 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:50.115 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:50.115 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:50.115 | 99.99th=[41157] 00:09:50.115 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:09:50.115 slat (nsec): min=8982, max=37161, avg=10735.77, stdev=1834.24 00:09:50.115 clat (usec): min=118, max=300, avg=133.48, stdev=13.11 00:09:50.115 lat (usec): min=129, max=337, avg=144.21, stdev=14.00 00:09:50.115 clat percentiles (usec): 00:09:50.115 | 1.00th=[ 122], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 127], 00:09:50.115 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 133], 00:09:50.115 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 161], 00:09:50.115 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 302], 99.95th=[ 302], 00:09:50.115 | 99.99th=[ 302] 00:09:50.115 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.115 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.115 lat (usec) : 250=95.51%, 500=0.19% 00:09:50.115 lat (msec) : 50=4.30% 00:09:50.115 cpu : usr=0.10%, sys=1.08%, ctx=535, majf=0, minf=1 00:09:50.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.115 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.115 00:09:50.115 Run status group 0 (all jobs): 00:09:50.115 READ: bw=90.3KiB/s (92.5kB/s), 90.3KiB/s-90.3KiB/s (92.5kB/s-92.5kB/s), io=92.0KiB (94.2kB), 
run=1019-1019msec 00:09:50.115 WRITE: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), io=2048KiB (2097kB), run=1019-1019msec 00:09:50.115 00:09:50.115 Disk stats (read/write): 00:09:50.115 nvme0n1: ios=70/512, merge=0/0, ticks=845/66, in_queue=911, util=91.28% 00:09:50.115 16:23:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:50.115 16:23:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:50.115 16:23:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:50.115 16:23:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:50.115 16:23:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.115 rmmod nvme_tcp 00:09:50.115 rmmod nvme_fabrics 00:09:50.115 rmmod nvme_keyring 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 850197 ']' 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 850197 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 850197 ']' 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 850197 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 850197 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 850197' 00:09:50.115 killing process with pid 850197 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 850197 00:09:50.115 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 850197 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.375 16:23:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.912 00:09:52.912 real 0m15.006s 00:09:52.912 user 0m34.184s 00:09:52.912 sys 0m5.125s 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.912 ************************************ 00:09:52.912 END TEST nvmf_nmic 00:09:52.912 ************************************ 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
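The teardown above runs through `killprocess`: confirm the pid is alive with `kill -0`, resolve its command name with `ps` and refuse to kill a `sudo` wrapper, then `kill` and `wait`. A simplified sketch against a throwaway background job (the helper name is illustrative; the real function also matches on the pid argument):

```shell
kill_process() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1         # process must exist
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1               # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                        # reap it; ignore SIGTERM status
    return 0
}

sleep 60 &                                         # stand-in for the nvmf target app
kill_process $!
```

Waiting on the killed pid matters: it keeps the exit status out of `set -e`'s way and guarantees the process is reaped before the script tears down interfaces.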
00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.912 ************************************ 00:09:52.912 START TEST nvmf_fio_target 00:09:52.912 ************************************ 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:52.912 * Looking for test storage... 00:09:52.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.912 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.913 16:23:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.913 --rc genhtml_branch_coverage=1 00:09:52.913 --rc genhtml_function_coverage=1 00:09:52.913 --rc genhtml_legend=1 00:09:52.913 --rc geninfo_all_blocks=1 00:09:52.913 --rc geninfo_unexecuted_blocks=1 00:09:52.913 00:09:52.913 ' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.913 --rc genhtml_branch_coverage=1 00:09:52.913 --rc genhtml_function_coverage=1 00:09:52.913 --rc genhtml_legend=1 00:09:52.913 --rc geninfo_all_blocks=1 00:09:52.913 --rc geninfo_unexecuted_blocks=1 00:09:52.913 00:09:52.913 ' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.913 --rc genhtml_branch_coverage=1 00:09:52.913 --rc genhtml_function_coverage=1 00:09:52.913 --rc genhtml_legend=1 00:09:52.913 --rc geninfo_all_blocks=1 00:09:52.913 --rc geninfo_unexecuted_blocks=1 00:09:52.913 00:09:52.913 ' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.913 --rc 
genhtml_branch_coverage=1 00:09:52.913 --rc genhtml_function_coverage=1 00:09:52.913 --rc genhtml_legend=1 00:09:52.913 --rc geninfo_all_blocks=1 00:09:52.913 --rc geninfo_unexecuted_blocks=1 00:09:52.913 00:09:52.913 ' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.913 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.914 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.914 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.914 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.914 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.914 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.914 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.914 16:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.488 16:23:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:59.488 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:59.488 16:23:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:59.488 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.488 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:59.489 Found net devices under 0000:af:00.0: cvl_0_0 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:59.489 Found net devices under 0000:af:00.1: cvl_0_1 
00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:59.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:09:59.489 00:09:59.489 --- 10.0.0.2 ping statistics --- 00:09:59.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.489 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:09:59.489 00:09:59.489 --- 10.0.0.1 ping statistics --- 00:09:59.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.489 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=854950 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 854950 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 854950 ']' 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.489 [2024-12-14 16:23:28.684307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:59.489 [2024-12-14 16:23:28.684349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.489 [2024-12-14 16:23:28.763746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.489 [2024-12-14 16:23:28.786516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.489 [2024-12-14 16:23:28.786551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.489 [2024-12-14 16:23:28.786561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.489 [2024-12-14 16:23:28.786567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.489 [2024-12-14 16:23:28.786572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:59.489 [2024-12-14 16:23:28.787996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.489 [2024-12-14 16:23:28.788106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.489 [2024-12-14 16:23:28.788213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.489 [2024-12-14 16:23:28.788215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.489 16:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:59.489 [2024-12-14 16:23:29.088781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.489 16:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.489 16:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:59.489 16:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.489 16:23:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:59.489 16:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.749 16:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:59.749 16:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.008 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:00.008 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:00.267 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.526 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:00.526 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.786 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:00.786 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.786 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:00.786 16:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:01.045 16:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:01.304 16:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:01.304 16:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.563 16:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:01.563 16:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:01.823 16:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.823 [2024-12-14 16:23:31.822474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.823 16:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:02.082 16:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:02.341 16:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:03.720 16:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:03.720 16:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:03.720 16:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.720 16:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:03.720 16:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:03.720 16:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:05.651 16:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:05.651 16:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:05.651 16:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:05.651 16:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:05.651 16:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:05.651 16:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:05.651 16:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:05.651 [global] 00:10:05.651 thread=1 00:10:05.651 invalidate=1 00:10:05.651 rw=write 00:10:05.651 time_based=1 00:10:05.651 runtime=1 00:10:05.651 ioengine=libaio 00:10:05.651 direct=1 00:10:05.651 bs=4096 00:10:05.651 iodepth=1 00:10:05.651 norandommap=0 00:10:05.651 numjobs=1 00:10:05.651 00:10:05.651 
verify_dump=1 00:10:05.651 verify_backlog=512 00:10:05.651 verify_state_save=0 00:10:05.651 do_verify=1 00:10:05.651 verify=crc32c-intel 00:10:05.651 [job0] 00:10:05.651 filename=/dev/nvme0n1 00:10:05.651 [job1] 00:10:05.651 filename=/dev/nvme0n2 00:10:05.651 [job2] 00:10:05.651 filename=/dev/nvme0n3 00:10:05.651 [job3] 00:10:05.651 filename=/dev/nvme0n4 00:10:05.651 Could not set queue depth (nvme0n1) 00:10:05.651 Could not set queue depth (nvme0n2) 00:10:05.651 Could not set queue depth (nvme0n3) 00:10:05.651 Could not set queue depth (nvme0n4) 00:10:05.913 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.913 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.913 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.913 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.913 fio-3.35 00:10:05.913 Starting 4 threads 00:10:07.283 00:10:07.283 job0: (groupid=0, jobs=1): err= 0: pid=856273: Sat Dec 14 16:23:36 2024 00:10:07.283 read: IOPS=96, BW=385KiB/s (394kB/s)(392KiB/1019msec) 00:10:07.283 slat (nsec): min=6672, max=24911, avg=11102.11, stdev=6478.11 00:10:07.283 clat (usec): min=189, max=42000, avg=9462.99, stdev=17198.31 00:10:07.283 lat (usec): min=197, max=42022, avg=9474.10, stdev=17202.97 00:10:07.283 clat percentiles (usec): 00:10:07.283 | 1.00th=[ 190], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 235], 00:10:07.283 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 269], 60.00th=[ 281], 00:10:07.283 | 70.00th=[ 306], 80.00th=[40633], 90.00th=[41157], 95.00th=[42206], 00:10:07.283 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:07.283 | 99.99th=[42206] 00:10:07.283 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:10:07.283 slat (nsec): min=9189, max=39518, avg=10365.95, 
stdev=1724.23 00:10:07.283 clat (usec): min=135, max=297, avg=162.22, stdev=16.31 00:10:07.283 lat (usec): min=145, max=337, avg=172.58, stdev=16.92 00:10:07.283 clat percentiles (usec): 00:10:07.283 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:10:07.283 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:10:07.283 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 190], 00:10:07.283 | 99.00th=[ 212], 99.50th=[ 251], 99.90th=[ 297], 99.95th=[ 297], 00:10:07.283 | 99.99th=[ 297] 00:10:07.283 bw ( KiB/s): min= 4096, max= 4096, per=33.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:07.283 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:07.283 lat (usec) : 250=89.51%, 500=6.89% 00:10:07.283 lat (msec) : 50=3.61% 00:10:07.283 cpu : usr=0.29%, sys=0.59%, ctx=611, majf=0, minf=2 00:10:07.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.283 issued rwts: total=98,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.283 job1: (groupid=0, jobs=1): err= 0: pid=856275: Sat Dec 14 16:23:36 2024 00:10:07.283 read: IOPS=56, BW=224KiB/s (230kB/s)(228KiB/1016msec) 00:10:07.283 slat (nsec): min=7056, max=27031, avg=13473.25, stdev=6743.14 00:10:07.283 clat (usec): min=225, max=41287, avg=15985.53, stdev=19984.51 00:10:07.283 lat (usec): min=234, max=41297, avg=15999.00, stdev=19988.38 00:10:07.283 clat percentiles (usec): 00:10:07.283 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 265], 20.00th=[ 273], 00:10:07.283 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 347], 00:10:07.283 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:07.283 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 
00:10:07.283 | 99.99th=[41157] 00:10:07.283 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:10:07.283 slat (nsec): min=9844, max=47069, avg=11744.80, stdev=2737.22 00:10:07.283 clat (usec): min=129, max=331, avg=186.85, stdev=22.92 00:10:07.283 lat (usec): min=140, max=345, avg=198.59, stdev=23.21 00:10:07.283 clat percentiles (usec): 00:10:07.283 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 174], 00:10:07.283 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:10:07.283 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 219], 00:10:07.283 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 334], 99.95th=[ 334], 00:10:07.283 | 99.99th=[ 334] 00:10:07.283 bw ( KiB/s): min= 4096, max= 4096, per=33.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:07.283 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:07.283 lat (usec) : 250=90.16%, 500=5.98% 00:10:07.283 lat (msec) : 50=3.87% 00:10:07.283 cpu : usr=0.89%, sys=0.49%, ctx=569, majf=0, minf=1 00:10:07.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.283 issued rwts: total=57,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.283 job2: (groupid=0, jobs=1): err= 0: pid=856276: Sat Dec 14 16:23:36 2024 00:10:07.283 read: IOPS=420, BW=1682KiB/s (1722kB/s)(1700KiB/1011msec) 00:10:07.283 slat (nsec): min=6400, max=25336, avg=8332.98, stdev=3475.47 00:10:07.283 clat (usec): min=177, max=42234, avg=2122.66, stdev=8613.64 00:10:07.283 lat (usec): min=184, max=42244, avg=2130.99, stdev=8615.65 00:10:07.283 clat percentiles (usec): 00:10:07.283 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:10:07.283 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 
217], 00:10:07.283 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 281], 95.00th=[ 322], 00:10:07.283 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:07.283 | 99.99th=[42206] 00:10:07.283 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:10:07.283 slat (nsec): min=9616, max=32195, avg=10984.41, stdev=1833.46 00:10:07.283 clat (usec): min=130, max=332, avg=188.35, stdev=23.18 00:10:07.283 lat (usec): min=141, max=363, avg=199.34, stdev=23.54 00:10:07.283 clat percentiles (usec): 00:10:07.283 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 176], 00:10:07.283 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:10:07.283 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 221], 00:10:07.283 | 99.00th=[ 233], 99.50th=[ 269], 99.90th=[ 334], 99.95th=[ 334], 00:10:07.283 | 99.99th=[ 334] 00:10:07.283 bw ( KiB/s): min= 4096, max= 4096, per=33.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:07.283 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:07.283 lat (usec) : 250=93.28%, 500=4.59% 00:10:07.283 lat (msec) : 50=2.13% 00:10:07.283 cpu : usr=0.20%, sys=1.19%, ctx=940, majf=0, minf=1 00:10:07.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.283 issued rwts: total=425,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.283 job3: (groupid=0, jobs=1): err= 0: pid=856277: Sat Dec 14 16:23:36 2024 00:10:07.283 read: IOPS=1241, BW=4967KiB/s (5086kB/s)(4972KiB/1001msec) 00:10:07.283 slat (nsec): min=6817, max=27764, avg=7766.71, stdev=1749.22 00:10:07.283 clat (usec): min=174, max=42254, avg=595.89, stdev=3876.25 00:10:07.283 lat (usec): min=181, max=42264, avg=603.66, stdev=3877.58 00:10:07.283 clat 
percentiles (usec): 00:10:07.283 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:10:07.283 | 30.00th=[ 212], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 243], 00:10:07.283 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:10:07.283 | 99.00th=[ 326], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:07.283 | 99.99th=[42206] 00:10:07.283 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:07.283 slat (usec): min=9, max=894, avg=11.69, stdev=22.59 00:10:07.283 clat (usec): min=112, max=923, avg=146.88, stdev=31.29 00:10:07.283 lat (usec): min=123, max=1074, avg=158.57, stdev=39.30 00:10:07.283 clat percentiles (usec): 00:10:07.283 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 131], 00:10:07.283 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:10:07.284 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 176], 00:10:07.284 | 99.00th=[ 210], 99.50th=[ 233], 99.90th=[ 660], 99.95th=[ 922], 00:10:07.284 | 99.99th=[ 922] 00:10:07.284 bw ( KiB/s): min= 4096, max= 4096, per=33.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:07.284 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:07.284 lat (usec) : 250=88.38%, 500=11.16%, 750=0.04%, 1000=0.04% 00:10:07.284 lat (msec) : 50=0.40% 00:10:07.284 cpu : usr=2.00%, sys=2.10%, ctx=2781, majf=0, minf=1 00:10:07.284 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.284 issued rwts: total=1243,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.284 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.284 00:10:07.284 Run status group 0 (all jobs): 00:10:07.284 READ: bw=7156KiB/s (7328kB/s), 224KiB/s-4967KiB/s (230kB/s-5086kB/s), io=7292KiB (7467kB), run=1001-1019msec 00:10:07.284 WRITE: 
bw=11.8MiB/s (12.3MB/s), 2010KiB/s-6138KiB/s (2058kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1019msec 00:10:07.284 00:10:07.284 Disk stats (read/write): 00:10:07.284 nvme0n1: ios=143/512, merge=0/0, ticks=740/76, in_queue=816, util=86.67% 00:10:07.284 nvme0n2: ios=103/512, merge=0/0, ticks=801/95, in_queue=896, util=90.64% 00:10:07.284 nvme0n3: ios=435/512, merge=0/0, ticks=1607/99, in_queue=1706, util=93.54% 00:10:07.284 nvme0n4: ios=1025/1024, merge=0/0, ticks=791/153, in_queue=944, util=94.23% 00:10:07.284 16:23:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:07.284 [global] 00:10:07.284 thread=1 00:10:07.284 invalidate=1 00:10:07.284 rw=randwrite 00:10:07.284 time_based=1 00:10:07.284 runtime=1 00:10:07.284 ioengine=libaio 00:10:07.284 direct=1 00:10:07.284 bs=4096 00:10:07.284 iodepth=1 00:10:07.284 norandommap=0 00:10:07.284 numjobs=1 00:10:07.284 00:10:07.284 verify_dump=1 00:10:07.284 verify_backlog=512 00:10:07.284 verify_state_save=0 00:10:07.284 do_verify=1 00:10:07.284 verify=crc32c-intel 00:10:07.284 [job0] 00:10:07.284 filename=/dev/nvme0n1 00:10:07.284 [job1] 00:10:07.284 filename=/dev/nvme0n2 00:10:07.284 [job2] 00:10:07.284 filename=/dev/nvme0n3 00:10:07.284 [job3] 00:10:07.284 filename=/dev/nvme0n4 00:10:07.284 Could not set queue depth (nvme0n1) 00:10:07.284 Could not set queue depth (nvme0n2) 00:10:07.284 Could not set queue depth (nvme0n3) 00:10:07.284 Could not set queue depth (nvme0n4) 00:10:07.284 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.284 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.284 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.284 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.284 fio-3.35 00:10:07.284 Starting 4 threads 00:10:08.654 00:10:08.654 job0: (groupid=0, jobs=1): err= 0: pid=856637: Sat Dec 14 16:23:38 2024 00:10:08.654 read: IOPS=1502, BW=6010KiB/s (6154kB/s)(6184KiB/1029msec) 00:10:08.654 slat (nsec): min=7117, max=28134, avg=9551.50, stdev=1754.40 00:10:08.654 clat (usec): min=182, max=42289, avg=396.56, stdev=1836.16 00:10:08.654 lat (usec): min=194, max=42299, avg=406.11, stdev=1836.16 00:10:08.654 clat percentiles (usec): 00:10:08.654 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 233], 00:10:08.654 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 289], 00:10:08.654 | 70.00th=[ 330], 80.00th=[ 433], 90.00th=[ 494], 95.00th=[ 510], 00:10:08.654 | 99.00th=[ 545], 99.50th=[ 635], 99.90th=[42206], 99.95th=[42206], 00:10:08.654 | 99.99th=[42206] 00:10:08.654 write: IOPS=1990, BW=7961KiB/s (8152kB/s)(8192KiB/1029msec); 0 zone resets 00:10:08.654 slat (nsec): min=9402, max=44347, avg=12882.00, stdev=2504.18 00:10:08.654 clat (usec): min=112, max=349, avg=177.23, stdev=35.23 00:10:08.654 lat (usec): min=123, max=363, avg=190.11, stdev=35.71 00:10:08.654 clat percentiles (usec): 00:10:08.654 | 1.00th=[ 124], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 151], 00:10:08.654 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 178], 00:10:08.654 | 70.00th=[ 188], 80.00th=[ 202], 90.00th=[ 223], 95.00th=[ 241], 00:10:08.654 | 99.00th=[ 306], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:10:08.654 | 99.99th=[ 351] 00:10:08.654 bw ( KiB/s): min= 8175, max= 8192, per=25.95%, avg=8183.50, stdev=12.02, samples=2 00:10:08.654 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:10:08.654 lat (usec) : 250=68.34%, 500=28.24%, 750=3.26%, 1000=0.06% 00:10:08.654 lat (msec) : 2=0.03%, 50=0.08% 00:10:08.654 cpu : usr=2.53%, sys=6.42%, ctx=3596, majf=0, minf=1 00:10:08.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:08.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.654 issued rwts: total=1546,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.654 job1: (groupid=0, jobs=1): err= 0: pid=856638: Sat Dec 14 16:23:38 2024 00:10:08.654 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:08.654 slat (nsec): min=3216, max=40651, avg=7675.15, stdev=1884.86 00:10:08.654 clat (usec): min=188, max=879, avg=269.06, stdev=47.76 00:10:08.654 lat (usec): min=195, max=894, avg=276.74, stdev=47.98 00:10:08.654 clat percentiles (usec): 00:10:08.654 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 235], 00:10:08.655 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:10:08.655 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 351], 00:10:08.655 | 99.00th=[ 433], 99.50th=[ 474], 99.90th=[ 635], 99.95th=[ 644], 00:10:08.655 | 99.99th=[ 881] 00:10:08.655 write: IOPS=2300, BW=9203KiB/s (9424kB/s)(9212KiB/1001msec); 0 zone resets 00:10:08.655 slat (nsec): min=5603, max=48081, avg=10649.22, stdev=2913.56 00:10:08.655 clat (usec): min=115, max=363, avg=173.01, stdev=27.97 00:10:08.655 lat (usec): min=130, max=374, avg=183.66, stdev=28.24 00:10:08.655 clat percentiles (usec): 00:10:08.655 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:10:08.655 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 176], 00:10:08.655 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 212], 95.00th=[ 223], 00:10:08.655 | 99.00th=[ 253], 99.50th=[ 285], 99.90th=[ 343], 99.95th=[ 347], 00:10:08.655 | 99.99th=[ 363] 00:10:08.655 bw ( KiB/s): min= 8694, max= 8694, per=27.57%, avg=8694.00, stdev= 0.00, samples=1 00:10:08.655 iops : min= 2173, max= 2173, avg=2173.00, stdev= 0.00, samples=1 00:10:08.655 lat (usec) : 250=68.51%, 500=31.35%, 750=0.11%, 
1000=0.02% 00:10:08.655 cpu : usr=1.60%, sys=4.70%, ctx=4354, majf=0, minf=1 00:10:08.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.655 issued rwts: total=2048,2303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.655 job2: (groupid=0, jobs=1): err= 0: pid=856639: Sat Dec 14 16:23:38 2024 00:10:08.655 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:08.655 slat (nsec): min=6670, max=29095, avg=8395.92, stdev=1437.05 00:10:08.655 clat (usec): min=192, max=479, avg=265.91, stdev=33.47 00:10:08.655 lat (usec): min=201, max=488, avg=274.31, stdev=33.48 00:10:08.655 clat percentiles (usec): 00:10:08.655 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 237], 00:10:08.655 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:10:08.655 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:10:08.655 | 99.00th=[ 351], 99.50th=[ 379], 99.90th=[ 424], 99.95th=[ 478], 00:10:08.655 | 99.99th=[ 478] 00:10:08.655 write: IOPS=2269, BW=9079KiB/s (9297kB/s)(9088KiB/1001msec); 0 zone resets 00:10:08.655 slat (nsec): min=9161, max=46127, avg=11695.67, stdev=2370.72 00:10:08.655 clat (usec): min=126, max=301, avg=176.07, stdev=23.95 00:10:08.655 lat (usec): min=136, max=339, avg=187.77, stdev=23.86 00:10:08.655 clat percentiles (usec): 00:10:08.655 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:10:08.655 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:10:08.655 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 212], 95.00th=[ 223], 00:10:08.655 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 265], 99.95th=[ 277], 00:10:08.655 | 99.99th=[ 302] 00:10:08.655 bw ( KiB/s): min= 8720, max= 8720, per=27.65%, avg=8720.00, stdev= 0.00, 
samples=1 00:10:08.655 iops : min= 2180, max= 2180, avg=2180.00, stdev= 0.00, samples=1 00:10:08.655 lat (usec) : 250=67.75%, 500=32.25% 00:10:08.655 cpu : usr=2.70%, sys=4.30%, ctx=4320, majf=0, minf=2 00:10:08.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.655 issued rwts: total=2048,2272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.655 job3: (groupid=0, jobs=1): err= 0: pid=856640: Sat Dec 14 16:23:38 2024 00:10:08.655 read: IOPS=1005, BW=4023KiB/s (4120kB/s)(4164KiB/1035msec) 00:10:08.655 slat (nsec): min=6756, max=23945, avg=8063.93, stdev=1586.48 00:10:08.655 clat (usec): min=184, max=42154, avg=672.99, stdev=4007.32 00:10:08.655 lat (usec): min=192, max=42164, avg=681.05, stdev=4007.87 00:10:08.655 clat percentiles (usec): 00:10:08.655 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 241], 00:10:08.655 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:10:08.655 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 351], 95.00th=[ 379], 00:10:08.655 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:08.655 | 99.99th=[42206] 00:10:08.655 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:10:08.655 slat (nsec): min=9481, max=48518, avg=10900.84, stdev=2117.90 00:10:08.655 clat (usec): min=127, max=333, avg=197.24, stdev=29.89 00:10:08.655 lat (usec): min=138, max=373, avg=208.14, stdev=30.14 00:10:08.655 clat percentiles (usec): 00:10:08.655 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 172], 00:10:08.655 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 206], 00:10:08.655 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:10:08.655 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 
326], 99.95th=[ 334], 00:10:08.655 | 99.99th=[ 334] 00:10:08.655 bw ( KiB/s): min= 4096, max= 8175, per=19.46%, avg=6135.50, stdev=2884.29, samples=2 00:10:08.655 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:10:08.655 lat (usec) : 250=68.02%, 500=31.35%, 750=0.23% 00:10:08.655 lat (msec) : 50=0.39% 00:10:08.655 cpu : usr=1.26%, sys=2.42%, ctx=2578, majf=0, minf=1 00:10:08.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.655 issued rwts: total=1041,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.655 00:10:08.655 Run status group 0 (all jobs): 00:10:08.655 READ: bw=25.2MiB/s (26.4MB/s), 4023KiB/s-8184KiB/s (4120kB/s-8380kB/s), io=26.1MiB (27.4MB), run=1001-1035msec 00:10:08.655 WRITE: bw=30.8MiB/s (32.3MB/s), 5936KiB/s-9203KiB/s (6079kB/s-9424kB/s), io=31.9MiB (33.4MB), run=1001-1035msec 00:10:08.655 00:10:08.655 Disk stats (read/write): 00:10:08.655 nvme0n1: ios=1587/1779, merge=0/0, ticks=1371/288, in_queue=1659, util=90.18% 00:10:08.655 nvme0n2: ios=1629/2048, merge=0/0, ticks=656/355, in_queue=1011, util=96.45% 00:10:08.655 nvme0n3: ios=1694/2048, merge=0/0, ticks=495/352, in_queue=847, util=91.16% 00:10:08.655 nvme0n4: ios=1089/1536, merge=0/0, ticks=1453/292, in_queue=1745, util=98.74% 00:10:08.655 16:23:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:08.655 [global] 00:10:08.655 thread=1 00:10:08.655 invalidate=1 00:10:08.655 rw=write 00:10:08.655 time_based=1 00:10:08.655 runtime=1 00:10:08.655 ioengine=libaio 00:10:08.655 direct=1 00:10:08.655 bs=4096 00:10:08.655 iodepth=128 00:10:08.655 norandommap=0 00:10:08.655 
numjobs=1 00:10:08.655 00:10:08.655 verify_dump=1 00:10:08.655 verify_backlog=512 00:10:08.655 verify_state_save=0 00:10:08.655 do_verify=1 00:10:08.655 verify=crc32c-intel 00:10:08.655 [job0] 00:10:08.655 filename=/dev/nvme0n1 00:10:08.655 [job1] 00:10:08.655 filename=/dev/nvme0n2 00:10:08.655 [job2] 00:10:08.655 filename=/dev/nvme0n3 00:10:08.655 [job3] 00:10:08.655 filename=/dev/nvme0n4 00:10:08.655 Could not set queue depth (nvme0n1) 00:10:08.655 Could not set queue depth (nvme0n2) 00:10:08.655 Could not set queue depth (nvme0n3) 00:10:08.655 Could not set queue depth (nvme0n4) 00:10:08.912 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.912 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.912 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.912 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.912 fio-3.35 00:10:08.912 Starting 4 threads 00:10:10.283 00:10:10.283 job0: (groupid=0, jobs=1): err= 0: pid=857011: Sat Dec 14 16:23:40 2024 00:10:10.283 read: IOPS=5655, BW=22.1MiB/s (23.2MB/s)(22.3MiB/1011msec) 00:10:10.283 slat (nsec): min=1269, max=10103k, avg=87064.72, stdev=634729.78 00:10:10.283 clat (usec): min=3428, max=37499, avg=10852.74, stdev=3099.33 00:10:10.283 lat (usec): min=3433, max=37507, avg=10939.80, stdev=3151.55 00:10:10.283 clat percentiles (usec): 00:10:10.283 | 1.00th=[ 4817], 5.00th=[ 7701], 10.00th=[ 8225], 20.00th=[ 9503], 00:10:10.283 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10421], 00:10:10.283 | 70.00th=[10814], 80.00th=[11863], 90.00th=[14746], 95.00th=[16909], 00:10:10.283 | 99.00th=[19530], 99.50th=[28443], 99.90th=[34341], 99.95th=[37487], 00:10:10.283 | 99.99th=[37487] 00:10:10.283 write: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1011msec); 0 zone 
resets 00:10:10.283 slat (usec): min=2, max=9016, avg=74.49, stdev=420.16 00:10:10.283 clat (usec): min=2151, max=45861, avg=10782.65, stdev=6114.11 00:10:10.283 lat (usec): min=2159, max=45869, avg=10857.14, stdev=6151.76 00:10:10.283 clat percentiles (usec): 00:10:10.283 | 1.00th=[ 3032], 5.00th=[ 4686], 10.00th=[ 6390], 20.00th=[ 7898], 00:10:10.283 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:10:10.283 | 70.00th=[10421], 80.00th=[10552], 90.00th=[12518], 95.00th=[25035], 00:10:10.283 | 99.00th=[39060], 99.50th=[40109], 99.90th=[43779], 99.95th=[45876], 00:10:10.283 | 99.99th=[45876] 00:10:10.283 bw ( KiB/s): min=24240, max=24576, per=33.23%, avg=24408.00, stdev=237.59, samples=2 00:10:10.283 iops : min= 6060, max= 6144, avg=6102.00, stdev=59.40, samples=2 00:10:10.283 lat (msec) : 4=1.84%, 10=47.37%, 20=46.80%, 50=4.00% 00:10:10.283 cpu : usr=4.65%, sys=5.74%, ctx=666, majf=0, minf=1 00:10:10.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:10.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.283 issued rwts: total=5718,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.283 job1: (groupid=0, jobs=1): err= 0: pid=857012: Sat Dec 14 16:23:40 2024 00:10:10.283 read: IOPS=5687, BW=22.2MiB/s (23.3MB/s)(22.4MiB/1010msec) 00:10:10.283 slat (nsec): min=1438, max=9799.8k, avg=90367.28, stdev=648107.61 00:10:10.283 clat (usec): min=3643, max=20725, avg=11131.82, stdev=2708.49 00:10:10.283 lat (usec): min=3649, max=20757, avg=11222.18, stdev=2749.34 00:10:10.283 clat percentiles (usec): 00:10:10.283 | 1.00th=[ 4490], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9634], 00:10:10.283 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:10:10.283 | 70.00th=[11076], 80.00th=[12911], 90.00th=[15533], 
95.00th=[16909], 00:10:10.283 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19792], 99.95th=[20317], 00:10:10.283 | 99.99th=[20841] 00:10:10.283 write: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec); 0 zone resets 00:10:10.283 slat (usec): min=2, max=40679, avg=71.52, stdev=607.97 00:10:10.283 clat (usec): min=638, max=57385, avg=9752.80, stdev=4751.19 00:10:10.283 lat (usec): min=652, max=57395, avg=9824.32, stdev=4808.44 00:10:10.283 clat percentiles (usec): 00:10:10.283 | 1.00th=[ 3261], 5.00th=[ 4178], 10.00th=[ 5538], 20.00th=[ 7832], 00:10:10.283 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:10:10.283 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10552], 95.00th=[10945], 00:10:10.283 | 99.00th=[41681], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:10:10.283 | 99.99th=[57410] 00:10:10.283 bw ( KiB/s): min=23664, max=25368, per=33.38%, avg=24516.00, stdev=1204.91, samples=2 00:10:10.283 iops : min= 5916, max= 6342, avg=6129.00, stdev=301.23, samples=2 00:10:10.283 lat (usec) : 750=0.03% 00:10:10.283 lat (msec) : 4=2.25%, 10=35.00%, 20=62.15%, 50=0.29%, 100=0.28% 00:10:10.283 cpu : usr=3.77%, sys=6.05%, ctx=784, majf=0, minf=1 00:10:10.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:10.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.283 issued rwts: total=5744,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.283 job2: (groupid=0, jobs=1): err= 0: pid=857014: Sat Dec 14 16:23:40 2024 00:10:10.283 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:10:10.283 slat (nsec): min=1426, max=12632k, avg=127698.71, stdev=904625.37 00:10:10.283 clat (usec): min=4548, max=33495, avg=14590.22, stdev=5133.35 00:10:10.283 lat (usec): min=4556, max=33498, avg=14717.92, stdev=5201.76 00:10:10.283 clat 
percentiles (usec): 00:10:10.283 | 1.00th=[ 6063], 5.00th=[10683], 10.00th=[10683], 20.00th=[11469], 00:10:10.283 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:10:10.283 | 70.00th=[13829], 80.00th=[15270], 90.00th=[22676], 95.00th=[27919], 00:10:10.283 | 99.00th=[32113], 99.50th=[32637], 99.90th=[33424], 99.95th=[33424], 00:10:10.283 | 99.99th=[33424] 00:10:10.283 write: IOPS=3170, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1011msec); 0 zone resets 00:10:10.283 slat (usec): min=2, max=10910, avg=183.83, stdev=900.45 00:10:10.283 clat (usec): min=3012, max=63823, avg=25944.68, stdev=17529.28 00:10:10.283 lat (usec): min=3023, max=63841, avg=26128.51, stdev=17628.09 00:10:10.283 clat percentiles (usec): 00:10:10.283 | 1.00th=[ 3589], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 9110], 00:10:10.283 | 30.00th=[13566], 40.00th=[21103], 50.00th=[22676], 60.00th=[22676], 00:10:10.283 | 70.00th=[24249], 80.00th=[41681], 90.00th=[59507], 95.00th=[63177], 00:10:10.283 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:10:10.283 | 99.99th=[63701] 00:10:10.283 bw ( KiB/s): min=11016, max=13680, per=16.81%, avg=12348.00, stdev=1883.73, samples=2 00:10:10.283 iops : min= 2754, max= 3420, avg=3087.00, stdev=470.93, samples=2 00:10:10.283 lat (msec) : 4=0.72%, 10=12.31%, 20=48.78%, 50=29.95%, 100=8.24% 00:10:10.283 cpu : usr=2.38%, sys=3.37%, ctx=376, majf=0, minf=1 00:10:10.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:10.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.283 issued rwts: total=3072,3205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.283 job3: (groupid=0, jobs=1): err= 0: pid=857015: Sat Dec 14 16:23:40 2024 00:10:10.283 read: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(11.2MiB/1011msec) 00:10:10.284 slat (nsec): 
min=1157, max=12241k, avg=135558.36, stdev=881058.09 00:10:10.284 clat (usec): min=5974, max=33271, avg=15696.42, stdev=5301.27 00:10:10.284 lat (usec): min=5980, max=33282, avg=15831.98, stdev=5357.19 00:10:10.284 clat percentiles (usec): 00:10:10.284 | 1.00th=[ 6194], 5.00th=[11207], 10.00th=[12387], 20.00th=[12780], 00:10:10.284 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:10:10.284 | 70.00th=[16188], 80.00th=[19530], 90.00th=[24773], 95.00th=[27657], 00:10:10.284 | 99.00th=[31851], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:10:10.284 | 99.99th=[33162] 00:10:10.284 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:10:10.284 slat (usec): min=2, max=11391, avg=189.80, stdev=959.10 00:10:10.284 clat (usec): min=1464, max=85665, avg=27157.73, stdev=18548.22 00:10:10.284 lat (usec): min=1483, max=85676, avg=27347.54, stdev=18646.30 00:10:10.284 clat percentiles (usec): 00:10:10.284 | 1.00th=[ 1991], 5.00th=[ 4490], 10.00th=[ 6718], 20.00th=[10945], 00:10:10.284 | 30.00th=[16450], 40.00th=[21365], 50.00th=[22676], 60.00th=[22938], 00:10:10.284 | 70.00th=[31327], 80.00th=[43254], 90.00th=[59507], 95.00th=[63177], 00:10:10.284 | 99.00th=[81265], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 00:10:10.284 | 99.99th=[85459] 00:10:10.284 bw ( KiB/s): min=12288, max=12288, per=16.73%, avg=12288.00, stdev= 0.00, samples=2 00:10:10.284 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:10.284 lat (msec) : 2=0.56%, 4=1.74%, 10=8.27%, 20=47.01%, 50=33.88% 00:10:10.284 lat (msec) : 100=8.55% 00:10:10.284 cpu : usr=1.88%, sys=3.37%, ctx=384, majf=0, minf=2 00:10:10.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:10.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.284 issued rwts: total=2855,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:10.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.284 00:10:10.284 Run status group 0 (all jobs): 00:10:10.284 READ: bw=67.2MiB/s (70.5MB/s), 11.0MiB/s-22.2MiB/s (11.6MB/s-23.3MB/s), io=67.9MiB (71.2MB), run=1010-1011msec 00:10:10.284 WRITE: bw=71.7MiB/s (75.2MB/s), 11.9MiB/s-23.8MiB/s (12.4MB/s-24.9MB/s), io=72.5MiB (76.0MB), run=1010-1011msec 00:10:10.284 00:10:10.284 Disk stats (read/write): 00:10:10.284 nvme0n1: ios=5169/5399, merge=0/0, ticks=52044/51591, in_queue=103635, util=86.77% 00:10:10.284 nvme0n2: ios=4751/5120, merge=0/0, ticks=51526/47427, in_queue=98953, util=97.66% 00:10:10.284 nvme0n3: ios=2618/2719, merge=0/0, ticks=36599/67942, in_queue=104541, util=98.44% 00:10:10.284 nvme0n4: ios=2048/2559, merge=0/0, ticks=30629/74333, in_queue=104962, util=89.72% 00:10:10.284 16:23:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:10.284 [global] 00:10:10.284 thread=1 00:10:10.284 invalidate=1 00:10:10.284 rw=randwrite 00:10:10.284 time_based=1 00:10:10.284 runtime=1 00:10:10.284 ioengine=libaio 00:10:10.284 direct=1 00:10:10.284 bs=4096 00:10:10.284 iodepth=128 00:10:10.284 norandommap=0 00:10:10.284 numjobs=1 00:10:10.284 00:10:10.284 verify_dump=1 00:10:10.284 verify_backlog=512 00:10:10.284 verify_state_save=0 00:10:10.284 do_verify=1 00:10:10.284 verify=crc32c-intel 00:10:10.284 [job0] 00:10:10.284 filename=/dev/nvme0n1 00:10:10.284 [job1] 00:10:10.284 filename=/dev/nvme0n2 00:10:10.284 [job2] 00:10:10.284 filename=/dev/nvme0n3 00:10:10.284 [job3] 00:10:10.284 filename=/dev/nvme0n4 00:10:10.284 Could not set queue depth (nvme0n1) 00:10:10.284 Could not set queue depth (nvme0n2) 00:10:10.284 Could not set queue depth (nvme0n3) 00:10:10.284 Could not set queue depth (nvme0n4) 00:10:10.541 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:10.541 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.541 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.541 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.541 fio-3.35 00:10:10.541 Starting 4 threads 00:10:12.016 00:10:12.016 job0: (groupid=0, jobs=1): err= 0: pid=857385: Sat Dec 14 16:23:41 2024 00:10:12.016 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:12.016 slat (nsec): min=1726, max=23997k, avg=143195.83, stdev=1141219.90 00:10:12.016 clat (usec): min=6463, max=67564, avg=17757.30, stdev=12116.49 00:10:12.016 lat (usec): min=6475, max=67589, avg=17900.49, stdev=12235.11 00:10:12.016 clat percentiles (usec): 00:10:12.016 | 1.00th=[ 7111], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10159], 00:10:12.016 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11600], 60.00th=[13960], 00:10:12.016 | 70.00th=[15926], 80.00th=[24249], 90.00th=[39584], 95.00th=[46924], 00:10:12.016 | 99.00th=[52167], 99.50th=[52167], 99.90th=[57410], 99.95th=[65799], 00:10:12.016 | 99.99th=[67634] 00:10:12.016 write: IOPS=3260, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:10:12.016 slat (usec): min=2, max=13478, avg=164.37, stdev=834.68 00:10:12.016 clat (usec): min=658, max=92415, avg=22109.87, stdev=20808.15 00:10:12.016 lat (usec): min=669, max=92425, avg=22274.23, stdev=20941.61 00:10:12.016 clat percentiles (usec): 00:10:12.016 | 1.00th=[ 5407], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[ 9896], 00:10:12.016 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[13960], 00:10:12.016 | 70.00th=[23200], 80.00th=[30016], 90.00th=[62653], 95.00th=[72877], 00:10:12.016 | 99.00th=[90702], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:10:12.016 | 99.99th=[92799] 00:10:12.016 bw ( KiB/s): min= 9144, max= 9144, per=14.90%, 
avg=9144.00, stdev= 0.00, samples=1 00:10:12.016 iops : min= 2286, max= 2286, avg=2286.00, stdev= 0.00, samples=1 00:10:12.016 lat (usec) : 750=0.05% 00:10:12.016 lat (msec) : 10=22.25%, 20=49.48%, 50=20.90%, 100=7.32% 00:10:12.016 cpu : usr=3.20%, sys=4.70%, ctx=315, majf=0, minf=1 00:10:12.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:12.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.016 issued rwts: total=3072,3264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.016 job1: (groupid=0, jobs=1): err= 0: pid=857392: Sat Dec 14 16:23:41 2024 00:10:12.016 read: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1045msec) 00:10:12.016 slat (nsec): min=1130, max=29720k, avg=93675.00, stdev=802797.61 00:10:12.016 clat (usec): min=1593, max=53062, avg=13636.61, stdev=8820.54 00:10:12.016 lat (usec): min=1598, max=56934, avg=13730.29, stdev=8861.28 00:10:12.016 clat percentiles (usec): 00:10:12.016 | 1.00th=[ 2868], 5.00th=[ 4752], 10.00th=[ 8717], 20.00th=[ 9372], 00:10:12.016 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10945], 60.00th=[11338], 00:10:12.016 | 70.00th=[13042], 80.00th=[16057], 90.00th=[24249], 95.00th=[31327], 00:10:12.016 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:10:12.016 | 99.99th=[53216] 00:10:12.016 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:10:12.016 slat (usec): min=2, max=15474, avg=108.48, stdev=659.53 00:10:12.016 clat (usec): min=450, max=49783, avg=15159.31, stdev=10284.90 00:10:12.016 lat (usec): min=485, max=49797, avg=15267.79, stdev=10360.18 00:10:12.016 clat percentiles (usec): 00:10:12.016 | 1.00th=[ 2073], 5.00th=[ 3654], 10.00th=[ 6456], 20.00th=[ 7832], 00:10:12.016 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[11076], 60.00th=[13304], 00:10:12.016 | 
70.00th=[17433], 80.00th=[24249], 90.00th=[31327], 95.00th=[36963], 00:10:12.016 | 99.00th=[44827], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:10:12.016 | 99.99th=[49546] 00:10:12.016 bw ( KiB/s): min=17032, max=19832, per=30.03%, avg=18432.00, stdev=1979.90, samples=2 00:10:12.016 iops : min= 4258, max= 4958, avg=4608.00, stdev=494.97, samples=2 00:10:12.016 lat (usec) : 500=0.05%, 750=0.08%, 1000=0.11% 00:10:12.016 lat (msec) : 2=0.31%, 4=3.31%, 10=38.59%, 20=35.95%, 50=20.32% 00:10:12.016 lat (msec) : 100=1.28% 00:10:12.016 cpu : usr=3.26%, sys=5.36%, ctx=337, majf=0, minf=2 00:10:12.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:12.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.016 issued rwts: total=4600,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.016 job2: (groupid=0, jobs=1): err= 0: pid=857410: Sat Dec 14 16:23:41 2024 00:10:12.016 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:10:12.016 slat (nsec): min=1416, max=7116.1k, avg=89178.04, stdev=511852.10 00:10:12.016 clat (usec): min=4749, max=24033, avg=11080.26, stdev=2435.93 00:10:12.016 lat (usec): min=4752, max=24043, avg=11169.44, stdev=2480.73 00:10:12.016 clat percentiles (usec): 00:10:12.016 | 1.00th=[ 7111], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[ 9241], 00:10:12.016 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11600], 00:10:12.016 | 70.00th=[11863], 80.00th=[12649], 90.00th=[15008], 95.00th=[15139], 00:10:12.016 | 99.00th=[18482], 99.50th=[21103], 99.90th=[23987], 99.95th=[23987], 00:10:12.016 | 99.99th=[23987] 00:10:12.016 write: IOPS=4480, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1005msec); 0 zone resets 00:10:12.016 slat (usec): min=2, max=17342, avg=133.89, stdev=737.51 00:10:12.016 clat (usec): min=694, max=83082, 
avg=18202.28, stdev=15912.94 00:10:12.016 lat (usec): min=704, max=83090, avg=18336.18, stdev=16010.56 00:10:12.016 clat percentiles (usec): 00:10:12.016 | 1.00th=[ 1582], 5.00th=[ 6718], 10.00th=[ 8717], 20.00th=[ 9241], 00:10:12.016 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[11863], 00:10:12.016 | 70.00th=[17695], 80.00th=[24511], 90.00th=[43779], 95.00th=[56361], 00:10:12.016 | 99.00th=[77071], 99.50th=[78119], 99.90th=[83362], 99.95th=[83362], 00:10:12.016 | 99.99th=[83362] 00:10:12.016 bw ( KiB/s): min=10432, max=24576, per=28.52%, avg=17504.00, stdev=10001.32, samples=2 00:10:12.016 iops : min= 2608, max= 6144, avg=4376.00, stdev=2500.33, samples=2 00:10:12.016 lat (usec) : 750=0.07% 00:10:12.016 lat (msec) : 2=0.85%, 4=0.80%, 10=43.83%, 20=39.39%, 50=11.40% 00:10:12.016 lat (msec) : 100=3.66% 00:10:12.016 cpu : usr=2.69%, sys=5.38%, ctx=586, majf=0, minf=1 00:10:12.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:12.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.016 issued rwts: total=4096,4503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.016 job3: (groupid=0, jobs=1): err= 0: pid=857416: Sat Dec 14 16:23:41 2024 00:10:12.016 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:12.016 slat (nsec): min=1456, max=14804k, avg=124609.61, stdev=806424.71 00:10:12.016 clat (usec): min=5092, max=76034, avg=14132.50, stdev=7456.03 00:10:12.016 lat (usec): min=5098, max=76044, avg=14257.11, stdev=7558.83 00:10:12.016 clat percentiles (usec): 00:10:12.016 | 1.00th=[ 7242], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:10:12.016 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11994], 60.00th=[12518], 00:10:12.016 | 70.00th=[13435], 80.00th=[16319], 90.00th=[19530], 95.00th=[25297], 00:10:12.016 | 
99.00th=[57410], 99.50th=[66847], 99.90th=[76022], 99.95th=[76022], 00:10:12.016 | 99.99th=[76022] 00:10:12.016 write: IOPS=3652, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1002msec); 0 zone resets 00:10:12.016 slat (usec): min=2, max=23189, avg=142.72, stdev=800.82 00:10:12.016 clat (usec): min=334, max=82875, avg=20751.87, stdev=15325.43 00:10:12.016 lat (usec): min=393, max=82896, avg=20894.60, stdev=15407.32 00:10:12.016 clat percentiles (usec): 00:10:12.016 | 1.00th=[ 2638], 5.00th=[ 5342], 10.00th=[ 7832], 20.00th=[ 9634], 00:10:12.016 | 30.00th=[11076], 40.00th=[11600], 50.00th=[13304], 60.00th=[22938], 00:10:12.016 | 70.00th=[26608], 80.00th=[31851], 90.00th=[38536], 95.00th=[51643], 00:10:12.016 | 99.00th=[77071], 99.50th=[79168], 99.90th=[83362], 99.95th=[83362], 00:10:12.016 | 99.99th=[83362] 00:10:12.016 bw ( KiB/s): min=11568, max=11568, per=18.85%, avg=11568.00, stdev= 0.00, samples=1 00:10:12.016 iops : min= 2892, max= 2892, avg=2892.00, stdev= 0.00, samples=1 00:10:12.016 lat (usec) : 500=0.06%, 1000=0.01% 00:10:12.016 lat (msec) : 2=0.37%, 4=1.24%, 10=16.47%, 20=56.71%, 50=21.71% 00:10:12.016 lat (msec) : 100=3.42% 00:10:12.016 cpu : usr=3.30%, sys=5.29%, ctx=338, majf=0, minf=1 00:10:12.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:12.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.016 issued rwts: total=3584,3660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.016 00:10:12.016 Run status group 0 (all jobs): 00:10:12.016 READ: bw=57.4MiB/s (60.2MB/s), 12.0MiB/s-17.2MiB/s (12.6MB/s-18.0MB/s), io=60.0MiB (62.9MB), run=1001-1045msec 00:10:12.016 WRITE: bw=59.9MiB/s (62.9MB/s), 12.7MiB/s-17.5MiB/s (13.4MB/s-18.4MB/s), io=62.6MiB (65.7MB), run=1001-1045msec 00:10:12.016 00:10:12.016 Disk stats (read/write): 00:10:12.016 nvme0n1: 
ios=2075/2439, merge=0/0, ticks=22502/30402, in_queue=52904, util=97.39% 00:10:12.016 nvme0n2: ios=4101/4159, merge=0/0, ticks=42321/50999, in_queue=93320, util=86.80% 00:10:12.016 nvme0n3: ios=3642/4096, merge=0/0, ticks=14456/31582, in_queue=46038, util=98.02% 00:10:12.016 nvme0n4: ios=2601/3072, merge=0/0, ticks=38112/64240, in_queue=102352, util=98.11% 00:10:12.016 16:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:12.016 16:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=857612 00:10:12.016 16:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:12.016 16:23:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:12.016 [global] 00:10:12.016 thread=1 00:10:12.016 invalidate=1 00:10:12.016 rw=read 00:10:12.016 time_based=1 00:10:12.016 runtime=10 00:10:12.016 ioengine=libaio 00:10:12.016 direct=1 00:10:12.016 bs=4096 00:10:12.016 iodepth=1 00:10:12.016 norandommap=1 00:10:12.016 numjobs=1 00:10:12.016 00:10:12.016 [job0] 00:10:12.016 filename=/dev/nvme0n1 00:10:12.016 [job1] 00:10:12.016 filename=/dev/nvme0n2 00:10:12.016 [job2] 00:10:12.016 filename=/dev/nvme0n3 00:10:12.016 [job3] 00:10:12.016 filename=/dev/nvme0n4 00:10:12.017 Could not set queue depth (nvme0n1) 00:10:12.017 Could not set queue depth (nvme0n2) 00:10:12.017 Could not set queue depth (nvme0n3) 00:10:12.017 Could not set queue depth (nvme0n4) 00:10:12.017 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.017 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.017 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.017 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:12.017 fio-3.35 00:10:12.017 Starting 4 threads 00:10:15.325 16:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:15.325 16:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:15.325 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=278528, buflen=4096 00:10:15.325 fio: pid=857935, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:15.325 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:10:15.325 fio: pid=857929, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:15.325 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:15.325 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:15.325 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:15.325 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:15.325 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:10:15.325 fio: pid=857895, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:15.583 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:15.583 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:15.583 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4517888, buflen=4096 00:10:15.583 fio: pid=857912, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:15.583 00:10:15.583 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857895: Sat Dec 14 16:23:45 2024 00:10:15.583 read: IOPS=24, BW=97.6KiB/s (99.9kB/s)(308KiB/3157msec) 00:10:15.583 slat (usec): min=12, max=10784, avg=246.33, stdev=1425.74 00:10:15.583 clat (usec): min=303, max=42026, avg=40465.09, stdev=4638.91 00:10:15.583 lat (usec): min=343, max=51967, avg=40714.32, stdev=4888.72 00:10:15.583 clat percentiles (usec): 00:10:15.583 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:15.583 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:15.583 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:15.583 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:15.583 | 99.99th=[42206] 00:10:15.583 bw ( KiB/s): min= 93, max= 104, per=6.22%, avg=98.17, stdev= 4.67, samples=6 00:10:15.583 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:10:15.583 lat (usec) : 500=1.28% 00:10:15.583 lat (msec) : 50=97.44% 00:10:15.583 cpu : usr=0.13%, sys=0.00%, ctx=81, majf=0, minf=1 00:10:15.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.583 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.583 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.583 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857912: Sat Dec 14 
16:23:45 2024 00:10:15.583 read: IOPS=329, BW=1317KiB/s (1349kB/s)(4412KiB/3350msec) 00:10:15.583 slat (nsec): min=6984, max=64247, avg=9155.63, stdev=4981.33 00:10:15.583 clat (usec): min=168, max=41982, avg=3012.56, stdev=10335.66 00:10:15.583 lat (usec): min=175, max=42005, avg=3021.70, stdev=10339.83 00:10:15.583 clat percentiles (usec): 00:10:15.583 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:10:15.583 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:10:15.583 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[41157], 00:10:15.583 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:15.583 | 99.99th=[42206] 00:10:15.583 bw ( KiB/s): min= 96, max= 8264, per=92.57%, avg=1459.33, stdev=3333.59, samples=6 00:10:15.583 iops : min= 24, max= 2066, avg=364.83, stdev=833.40, samples=6 00:10:15.583 lat (usec) : 250=92.66%, 500=0.27% 00:10:15.583 lat (msec) : 4=0.09%, 50=6.88% 00:10:15.583 cpu : usr=0.03%, sys=0.72%, ctx=1107, majf=0, minf=1 00:10:15.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.583 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.583 issued rwts: total=1104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.583 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857929: Sat Dec 14 16:23:45 2024 00:10:15.583 read: IOPS=25, BW=98.9KiB/s (101kB/s)(288KiB/2913msec) 00:10:15.583 slat (nsec): min=9443, max=36542, avg=21510.12, stdev=4102.68 00:10:15.583 clat (usec): min=333, max=42118, avg=40135.99, stdev=6779.01 00:10:15.583 lat (usec): min=358, max=42140, avg=40157.49, stdev=6777.53 00:10:15.583 clat percentiles (usec): 00:10:15.583 | 1.00th=[ 334], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:15.583 
| 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:15.583 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:15.583 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:15.583 | 99.99th=[42206] 00:10:15.583 bw ( KiB/s): min= 96, max= 104, per=6.28%, avg=99.20, stdev= 4.38, samples=5 00:10:15.583 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:15.583 lat (usec) : 500=2.74% 00:10:15.583 lat (msec) : 50=95.89% 00:10:15.583 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=2 00:10:15.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.583 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.583 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.583 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857935: Sat Dec 14 16:23:45 2024 00:10:15.583 read: IOPS=25, BW=99.4KiB/s (102kB/s)(272KiB/2736msec) 00:10:15.583 slat (nsec): min=9621, max=41865, avg=22319.94, stdev=3128.89 00:10:15.583 clat (usec): min=387, max=42121, avg=40071.55, stdev=6964.30 00:10:15.583 lat (usec): min=414, max=42143, avg=40093.86, stdev=6962.25 00:10:15.583 clat percentiles (usec): 00:10:15.583 | 1.00th=[ 388], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:15.583 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:15.583 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:15.583 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:15.583 | 99.99th=[42206] 00:10:15.583 bw ( KiB/s): min= 96, max= 104, per=6.28%, avg=99.20, stdev= 4.38, samples=5 00:10:15.583 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:15.583 lat (usec) : 
500=2.90% 00:10:15.583 lat (msec) : 50=95.65% 00:10:15.583 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:10:15.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.583 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.583 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.583 00:10:15.583 Run status group 0 (all jobs): 00:10:15.583 READ: bw=1576KiB/s (1614kB/s), 97.6KiB/s-1317KiB/s (99.9kB/s-1349kB/s), io=5280KiB (5407kB), run=2736-3350msec 00:10:15.583 00:10:15.583 Disk stats (read/write): 00:10:15.583 nvme0n1: ios=111/0, merge=0/0, ticks=3992/0, in_queue=3992, util=99.01% 00:10:15.583 nvme0n2: ios=1134/0, merge=0/0, ticks=3535/0, in_queue=3535, util=99.60% 00:10:15.583 nvme0n3: ios=71/0, merge=0/0, ticks=2851/0, in_queue=2851, util=96.55% 00:10:15.583 nvme0n4: ios=65/0, merge=0/0, ticks=2604/0, in_queue=2604, util=96.45% 00:10:15.840 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:15.840 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:16.097 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.097 16:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:16.354 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.354 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:16.354 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.354 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:16.612 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:16.612 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 857612 00:10:16.612 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:16.612 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:16.868 16:23:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:16.868 nvmf hotplug test: fio failed as expected 00:10:16.868 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.126 16:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.126 rmmod nvme_tcp 00:10:17.126 rmmod nvme_fabrics 00:10:17.126 rmmod nvme_keyring 00:10:17.126 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.126 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:17.126 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 
00:10:17.126 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 854950 ']' 00:10:17.126 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 854950 00:10:17.126 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 854950 ']' 00:10:17.126 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 854950 00:10:17.127 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:17.127 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.127 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 854950 00:10:17.127 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.127 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.127 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 854950' 00:10:17.127 killing process with pid 854950 00:10:17.127 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 854950 00:10:17.127 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 854950 00:10:17.385 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.385 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.385 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.385 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:17.385 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:10:17.385 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.386 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.386 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.386 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.386 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.386 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.386 16:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.290 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.290 00:10:19.290 real 0m26.828s 00:10:19.290 user 1m47.605s 00:10:19.290 sys 0m7.929s 00:10:19.290 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.290 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.290 ************************************ 00:10:19.290 END TEST nvmf_fio_target 00:10:19.290 ************************************ 00:10:19.290 16:23:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:19.290 16:23:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.290 16:23:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.290 16:23:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.549 ************************************ 
00:10:19.549 START TEST nvmf_bdevio 00:10:19.549 ************************************ 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:19.549 * Looking for test storage... 00:10:19.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:19.549 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.550 16:23:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.550 --rc genhtml_branch_coverage=1 00:10:19.550 --rc genhtml_function_coverage=1 00:10:19.550 --rc genhtml_legend=1 00:10:19.550 --rc geninfo_all_blocks=1 00:10:19.550 --rc geninfo_unexecuted_blocks=1 00:10:19.550 00:10:19.550 ' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.550 --rc genhtml_branch_coverage=1 00:10:19.550 --rc genhtml_function_coverage=1 00:10:19.550 --rc genhtml_legend=1 00:10:19.550 --rc geninfo_all_blocks=1 00:10:19.550 --rc geninfo_unexecuted_blocks=1 00:10:19.550 00:10:19.550 ' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.550 --rc genhtml_branch_coverage=1 00:10:19.550 --rc genhtml_function_coverage=1 00:10:19.550 --rc genhtml_legend=1 00:10:19.550 --rc geninfo_all_blocks=1 00:10:19.550 --rc geninfo_unexecuted_blocks=1 00:10:19.550 00:10:19.550 ' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:19.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.550 --rc genhtml_branch_coverage=1 00:10:19.550 --rc genhtml_function_coverage=1 00:10:19.550 --rc genhtml_legend=1 00:10:19.550 --rc geninfo_all_blocks=1 00:10:19.550 --rc geninfo_unexecuted_blocks=1 00:10:19.550 00:10:19.550 ' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.550 16:23:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.550 16:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.119 16:23:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.119 16:23:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:26.119 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:26.119 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.119 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.120 
16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:26.120 Found net devices under 0000:af:00.0: cvl_0_0 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:26.120 Found net devices under 0000:af:00.1: cvl_0_1 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:10:26.120 00:10:26.120 --- 10.0.0.2 ping statistics --- 00:10:26.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.120 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:10:26.120 00:10:26.120 --- 10.0.0.1 ping statistics --- 00:10:26.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.120 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:26.120 16:23:55 
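The namespace topology the harness just built and ping-verified above (target interface moved into a netns at 10.0.0.2, initiator left in the root namespace at 10.0.0.1, TCP port 4420 opened) can be sketched standalone as below. This is a dry-run sketch, not the harness's actual code path: the interface names `cvl_0_0`/`cvl_0_1` are the device names from this particular run, and `run()` echoes each command instead of executing it (drop the echo and run as root to apply for real).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns loopback topology logged above.
# Assumptions: interface names are from this run; substitute your own.
run() { echo "+ $*"; }   # echo instead of execute, so this is safe anywhere

TARGET_IF=cvl_0_0        # moved into the namespace, addressed 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, addressed 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port toward the initiator-side interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings in the log (10.0.0.1 ↔ 10.0.0.2) are what confirm this topology before the target app is started inside the namespace.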
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=862140 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 862140 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 862140 ']' 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.120 [2024-12-14 16:23:55.640171] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:26.120 [2024-12-14 16:23:55.640215] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.120 [2024-12-14 16:23:55.720125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.120 [2024-12-14 16:23:55.742541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.120 [2024-12-14 16:23:55.742602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.120 [2024-12-14 16:23:55.742610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.120 [2024-12-14 16:23:55.742616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.120 [2024-12-14 16:23:55.742621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:26.120 [2024-12-14 16:23:55.744146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:26.120 [2024-12-14 16:23:55.744256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:26.120 [2024-12-14 16:23:55.744363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.120 [2024-12-14 16:23:55.744364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.120 [2024-12-14 16:23:55.887891] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.120 16:23:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.120 Malloc0 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:26.120 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.121 [2024-12-14 16:23:55.952190] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:26.121 { 00:10:26.121 "params": { 00:10:26.121 "name": "Nvme$subsystem", 00:10:26.121 "trtype": "$TEST_TRANSPORT", 00:10:26.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:26.121 "adrfam": "ipv4", 00:10:26.121 "trsvcid": "$NVMF_PORT", 00:10:26.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:26.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:26.121 "hdgst": ${hdgst:-false}, 00:10:26.121 "ddgst": ${ddgst:-false} 00:10:26.121 }, 00:10:26.121 "method": "bdev_nvme_attach_controller" 00:10:26.121 } 00:10:26.121 EOF 00:10:26.121 )") 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
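The `rpc_cmd` sequence logged above (transport creation, malloc bdev, subsystem, namespace, listener) can be sketched as an explicit RPC script. This is a hedged dry-run sketch: the `scripts/rpc.py` path is an assumption about a typical SPDK checkout (the harness uses its own `rpc_cmd` wrapper), and `rpc()` echoes rather than executes.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-setup RPCs logged above.
# Assumption: scripts/rpc.py is SPDK's RPC client in your checkout.
rpc() { echo "+ scripts/rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192 B in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener comes up on 10.0.0.2:4420, the harness feeds `bdevio` a generated attach-controller JSON config (shown in the log) pointing at that same subsystem.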
00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:26.121 16:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:26.121 "params": { 00:10:26.121 "name": "Nvme1", 00:10:26.121 "trtype": "tcp", 00:10:26.121 "traddr": "10.0.0.2", 00:10:26.121 "adrfam": "ipv4", 00:10:26.121 "trsvcid": "4420", 00:10:26.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:26.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:26.121 "hdgst": false, 00:10:26.121 "ddgst": false 00:10:26.121 }, 00:10:26.121 "method": "bdev_nvme_attach_controller" 00:10:26.121 }' 00:10:26.121 [2024-12-14 16:23:56.005274] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:26.121 [2024-12-14 16:23:56.005317] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862285 ] 00:10:26.121 [2024-12-14 16:23:56.080736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.121 [2024-12-14 16:23:56.105886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.121 [2024-12-14 16:23:56.105940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.121 [2024-12-14 16:23:56.105941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.379 I/O targets: 00:10:26.379 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:26.379 00:10:26.379 00:10:26.379 CUnit - A unit testing framework for C - Version 2.1-3 00:10:26.379 http://cunit.sourceforge.net/ 00:10:26.379 00:10:26.379 00:10:26.379 Suite: bdevio tests on: Nvme1n1 00:10:26.379 Test: blockdev write read block ...passed 00:10:26.379 Test: blockdev write zeroes read block ...passed 00:10:26.379 Test: blockdev write zeroes read no split ...passed 00:10:26.379 Test: blockdev write zeroes read split 
...passed 00:10:26.379 Test: blockdev write zeroes read split partial ...passed 00:10:26.379 Test: blockdev reset ...[2024-12-14 16:23:56.420438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:26.379 [2024-12-14 16:23:56.420500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1972340 (9): Bad file descriptor 00:10:26.636 [2024-12-14 16:23:56.530035] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:26.636 passed 00:10:26.636 Test: blockdev write read 8 blocks ...passed 00:10:26.636 Test: blockdev write read size > 128k ...passed 00:10:26.636 Test: blockdev write read invalid size ...passed 00:10:26.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.636 Test: blockdev write read max offset ...passed 00:10:26.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.636 Test: blockdev writev readv 8 blocks ...passed 00:10:26.636 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.895 Test: blockdev writev readv block ...passed 00:10:26.895 Test: blockdev writev readv size > 128k ...passed 00:10:26.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.895 Test: blockdev comparev and writev ...[2024-12-14 16:23:56.741169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.895 [2024-12-14 16:23:56.741204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.741218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.895 [2024-12-14 
16:23:56.741226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.741460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.895 [2024-12-14 16:23:56.741471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.741483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.895 [2024-12-14 16:23:56.741494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.741713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.895 [2024-12-14 16:23:56.741724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.741737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.895 [2024-12-14 16:23:56.741745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.741981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.895 [2024-12-14 16:23:56.741992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.742004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:26.895 [2024-12-14 16:23:56.742011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:26.895 passed 00:10:26.895 Test: blockdev nvme passthru rw ...passed 00:10:26.895 Test: blockdev nvme passthru vendor specific ...[2024-12-14 16:23:56.823929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:26.895 [2024-12-14 16:23:56.823946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.824047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:26.895 [2024-12-14 16:23:56.824057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.824153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:26.895 [2024-12-14 16:23:56.824163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:26.895 [2024-12-14 16:23:56.824261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:26.895 [2024-12-14 16:23:56.824271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:26.895 passed 00:10:26.895 Test: blockdev nvme admin passthru ...passed 00:10:26.895 Test: blockdev copy ...passed 00:10:26.895 00:10:26.895 Run Summary: Type Total Ran Passed Failed Inactive 00:10:26.895 suites 1 1 n/a 0 0 00:10:26.895 tests 23 23 23 0 0 00:10:26.895 asserts 152 152 152 0 n/a 00:10:26.895 00:10:26.895 Elapsed time = 1.253 seconds 
00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:27.153 rmmod nvme_tcp 00:10:27.153 rmmod nvme_fabrics 00:10:27.153 rmmod nvme_keyring 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 862140 ']' 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 862140 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 862140 ']' 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 862140 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 862140 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 862140' 00:10:27.153 killing process with pid 862140 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 862140 00:10:27.153 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 862140 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.412 16:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.316 16:23:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.316 00:10:29.316 real 0m9.992s 00:10:29.316 user 0m9.964s 00:10:29.316 sys 0m5.025s 00:10:29.316 16:23:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.317 16:23:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.317 ************************************ 00:10:29.317 END TEST nvmf_bdevio 00:10:29.317 ************************************ 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:29.576 00:10:29.576 real 4m33.397s 00:10:29.576 user 10m22.553s 00:10:29.576 sys 1m37.314s 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.576 ************************************ 00:10:29.576 END TEST nvmf_target_core 00:10:29.576 ************************************ 00:10:29.576 16:23:59 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:29.576 16:23:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.576 16:23:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.576 16:23:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
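The cleanup path logged at the end of the bdevio test (module unload, iptables restore minus the SPDK rules, namespace removal, address flush) can be sketched as below. This is a dry-run sketch with assumptions: the `ip netns delete` step is inferred from the `_remove_spdk_ns` helper's name rather than read from its source, and `run()` echoes instead of executing.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the teardown sequence logged above.
# Assumption: _remove_spdk_ns deletes the test namespace (inferred).
run() { echo "+ $*"; }

run modprobe -v -r nvme-tcp
run modprobe -v -r nvme-fabrics
# drop only the rules the harness tagged with the SPDK_NVMF comment
run 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_1
```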
00:10:29.576 ************************************ 00:10:29.576 START TEST nvmf_target_extra 00:10:29.576 ************************************ 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:29.576 * Looking for test storage... 00:10:29.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.576 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.835 --rc genhtml_branch_coverage=1 00:10:29.835 --rc genhtml_function_coverage=1 00:10:29.835 --rc genhtml_legend=1 00:10:29.835 --rc geninfo_all_blocks=1 
00:10:29.835 --rc geninfo_unexecuted_blocks=1 00:10:29.835 00:10:29.835 ' 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.835 --rc genhtml_branch_coverage=1 00:10:29.835 --rc genhtml_function_coverage=1 00:10:29.835 --rc genhtml_legend=1 00:10:29.835 --rc geninfo_all_blocks=1 00:10:29.835 --rc geninfo_unexecuted_blocks=1 00:10:29.835 00:10:29.835 ' 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.835 --rc genhtml_branch_coverage=1 00:10:29.835 --rc genhtml_function_coverage=1 00:10:29.835 --rc genhtml_legend=1 00:10:29.835 --rc geninfo_all_blocks=1 00:10:29.835 --rc geninfo_unexecuted_blocks=1 00:10:29.835 00:10:29.835 ' 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.835 --rc genhtml_branch_coverage=1 00:10:29.835 --rc genhtml_function_coverage=1 00:10:29.835 --rc genhtml_legend=1 00:10:29.835 --rc geninfo_all_blocks=1 00:10:29.835 --rc geninfo_unexecuted_blocks=1 00:10:29.835 00:10:29.835 ' 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.835 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:29.836 ************************************ 00:10:29.836 START TEST nvmf_example 00:10:29.836 ************************************ 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:29.836 * Looking for test storage... 00:10:29.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.836 
16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.836 --rc genhtml_branch_coverage=1 00:10:29.836 --rc genhtml_function_coverage=1 00:10:29.836 --rc genhtml_legend=1 00:10:29.836 --rc geninfo_all_blocks=1 00:10:29.836 --rc geninfo_unexecuted_blocks=1 00:10:29.836 00:10:29.836 ' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.836 --rc genhtml_branch_coverage=1 00:10:29.836 --rc genhtml_function_coverage=1 00:10:29.836 --rc genhtml_legend=1 00:10:29.836 --rc geninfo_all_blocks=1 00:10:29.836 --rc geninfo_unexecuted_blocks=1 00:10:29.836 00:10:29.836 ' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.836 --rc genhtml_branch_coverage=1 00:10:29.836 --rc genhtml_function_coverage=1 00:10:29.836 --rc genhtml_legend=1 00:10:29.836 --rc geninfo_all_blocks=1 00:10:29.836 --rc geninfo_unexecuted_blocks=1 00:10:29.836 00:10:29.836 ' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.836 --rc 
genhtml_branch_coverage=1 00:10:29.836 --rc genhtml_function_coverage=1 00:10:29.836 --rc genhtml_legend=1 00:10:29.836 --rc geninfo_all_blocks=1 00:10:29.836 --rc geninfo_unexecuted_blocks=1 00:10:29.836 00:10:29.836 ' 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.836 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.095 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:30.096 16:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.096 
16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.096 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.667 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.668 16:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:36.668 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:36.668 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:36.668 Found net devices under 0000:af:00.0: cvl_0_0 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.668 16:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:36.668 Found net devices under 0000:af:00.1: cvl_0_1 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.668 
16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:10:36.668 00:10:36.668 --- 10.0.0.2 ping statistics --- 00:10:36.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.668 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:36.668 00:10:36.668 --- 10.0.0.1 ping statistics --- 00:10:36.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.668 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.668 16:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=866251 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 866251 00:10:36.668 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 866251 ']' 00:10:36.669 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.669 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.669 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:36.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.669 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.669 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:36.926 16:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:36.926 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:49.115 Initializing NVMe Controllers 00:10:49.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:49.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:49.115 Initialization complete. Launching workers. 00:10:49.115 ======================================================== 00:10:49.115 Latency(us) 00:10:49.115 Device Information : IOPS MiB/s Average min max 00:10:49.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18459.90 72.11 3467.34 479.36 21996.67 00:10:49.115 ======================================================== 00:10:49.115 Total : 18459.90 72.11 3467.34 479.36 21996.67 00:10:49.115 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.115 rmmod nvme_tcp 00:10:49.115 rmmod nvme_fabrics 00:10:49.115 rmmod nvme_keyring 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 866251 ']' 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 866251 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 866251 ']' 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 866251 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 866251 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 866251' 00:10:49.115 killing process with pid 866251 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 866251 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 866251 00:10:49.115 nvmf threads initialize successfully 00:10:49.115 bdev subsystem init successfully 00:10:49.115 created a nvmf target service 00:10:49.115 create targets's poll groups done 00:10:49.115 all subsystems of target started 00:10:49.115 nvmf target is running 00:10:49.115 all subsystems of target stopped 00:10:49.115 destroy targets's poll groups done 00:10:49.115 destroyed the nvmf target service 00:10:49.115 bdev subsystem finish 
successfully 00:10:49.115 nvmf threads destroy successfully 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:49.115 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:49.116 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:49.116 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:49.116 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:49.116 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:49.116 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.116 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.116 16:24:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.683 00:10:49.683 real 0m19.795s 00:10:49.683 user 0m46.032s 00:10:49.683 sys 0m5.927s 00:10:49.683 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.683 ************************************ 00:10:49.683 END TEST nvmf_example 00:10:49.683 ************************************ 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:49.683 ************************************ 00:10:49.683 START TEST nvmf_filesystem 00:10:49.683 ************************************ 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:49.683 * Looking for test storage... 
00:10:49.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.683 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:49.684 
16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.684 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:49.946 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:49.946 --rc genhtml_branch_coverage=1 00:10:49.946 --rc genhtml_function_coverage=1 00:10:49.946 --rc genhtml_legend=1 00:10:49.946 --rc geninfo_all_blocks=1 00:10:49.946 --rc geninfo_unexecuted_blocks=1 00:10:49.946 00:10:49.946 ' 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.946 --rc genhtml_branch_coverage=1 00:10:49.946 --rc genhtml_function_coverage=1 00:10:49.946 --rc genhtml_legend=1 00:10:49.946 --rc geninfo_all_blocks=1 00:10:49.946 --rc geninfo_unexecuted_blocks=1 00:10:49.946 00:10:49.946 ' 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.946 --rc genhtml_branch_coverage=1 00:10:49.946 --rc genhtml_function_coverage=1 00:10:49.946 --rc genhtml_legend=1 00:10:49.946 --rc geninfo_all_blocks=1 00:10:49.946 --rc geninfo_unexecuted_blocks=1 00:10:49.946 00:10:49.946 ' 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.946 --rc genhtml_branch_coverage=1 00:10:49.946 --rc genhtml_function_coverage=1 00:10:49.946 --rc genhtml_legend=1 00:10:49.946 --rc geninfo_all_blocks=1 00:10:49.946 --rc geninfo_unexecuted_blocks=1 00:10:49.946 00:10:49.946 ' 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:49.946 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:49.946 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:49.946 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:49.947 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:49.947 
16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:49.947 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:49.947 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:49.947 #define SPDK_CONFIG_H 00:10:49.947 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:49.947 #define SPDK_CONFIG_APPS 1 00:10:49.947 #define SPDK_CONFIG_ARCH native 00:10:49.947 #undef SPDK_CONFIG_ASAN 00:10:49.947 #undef SPDK_CONFIG_AVAHI 00:10:49.947 #undef SPDK_CONFIG_CET 00:10:49.947 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:49.947 #define SPDK_CONFIG_COVERAGE 1 00:10:49.947 #define SPDK_CONFIG_CROSS_PREFIX 00:10:49.947 #undef SPDK_CONFIG_CRYPTO 00:10:49.947 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:49.947 #undef SPDK_CONFIG_CUSTOMOCF 00:10:49.947 #undef SPDK_CONFIG_DAOS 00:10:49.947 #define SPDK_CONFIG_DAOS_DIR 00:10:49.947 #define SPDK_CONFIG_DEBUG 1 00:10:49.947 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:49.947 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:49.947 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:49.947 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:49.947 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:49.947 #undef SPDK_CONFIG_DPDK_UADK 00:10:49.947 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:49.947 #define SPDK_CONFIG_EXAMPLES 1 00:10:49.947 #undef SPDK_CONFIG_FC 00:10:49.947 #define SPDK_CONFIG_FC_PATH 00:10:49.947 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:49.948 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:49.948 #define SPDK_CONFIG_FSDEV 1 00:10:49.948 #undef SPDK_CONFIG_FUSE 00:10:49.948 #undef SPDK_CONFIG_FUZZER 00:10:49.948 #define 
SPDK_CONFIG_FUZZER_LIB 00:10:49.948 #undef SPDK_CONFIG_GOLANG 00:10:49.948 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:49.948 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:49.948 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:49.948 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:49.948 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:49.948 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:49.948 #undef SPDK_CONFIG_HAVE_LZ4 00:10:49.948 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:49.948 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:49.948 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:49.948 #define SPDK_CONFIG_IDXD 1 00:10:49.948 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:49.948 #undef SPDK_CONFIG_IPSEC_MB 00:10:49.948 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:49.948 #define SPDK_CONFIG_ISAL 1 00:10:49.948 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:49.948 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:49.948 #define SPDK_CONFIG_LIBDIR 00:10:49.948 #undef SPDK_CONFIG_LTO 00:10:49.948 #define SPDK_CONFIG_MAX_LCORES 128 00:10:49.948 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:49.948 #define SPDK_CONFIG_NVME_CUSE 1 00:10:49.948 #undef SPDK_CONFIG_OCF 00:10:49.948 #define SPDK_CONFIG_OCF_PATH 00:10:49.948 #define SPDK_CONFIG_OPENSSL_PATH 00:10:49.948 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:49.948 #define SPDK_CONFIG_PGO_DIR 00:10:49.948 #undef SPDK_CONFIG_PGO_USE 00:10:49.948 #define SPDK_CONFIG_PREFIX /usr/local 00:10:49.948 #undef SPDK_CONFIG_RAID5F 00:10:49.948 #undef SPDK_CONFIG_RBD 00:10:49.948 #define SPDK_CONFIG_RDMA 1 00:10:49.948 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:49.948 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:49.948 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:49.948 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:49.948 #define SPDK_CONFIG_SHARED 1 00:10:49.948 #undef SPDK_CONFIG_SMA 00:10:49.948 #define SPDK_CONFIG_TESTS 1 00:10:49.948 #undef SPDK_CONFIG_TSAN 00:10:49.948 #define SPDK_CONFIG_UBLK 1 00:10:49.948 #define SPDK_CONFIG_UBSAN 1 00:10:49.948 #undef 
SPDK_CONFIG_UNIT_TESTS 00:10:49.948 #undef SPDK_CONFIG_URING 00:10:49.948 #define SPDK_CONFIG_URING_PATH 00:10:49.948 #undef SPDK_CONFIG_URING_ZNS 00:10:49.948 #undef SPDK_CONFIG_USDT 00:10:49.948 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:49.948 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:49.948 #define SPDK_CONFIG_VFIO_USER 1 00:10:49.948 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:49.948 #define SPDK_CONFIG_VHOST 1 00:10:49.948 #define SPDK_CONFIG_VIRTIO 1 00:10:49.948 #undef SPDK_CONFIG_VTUNE 00:10:49.948 #define SPDK_CONFIG_VTUNE_DIR 00:10:49.948 #define SPDK_CONFIG_WERROR 1 00:10:49.948 #define SPDK_CONFIG_WPDK_DIR 00:10:49.948 #undef SPDK_CONFIG_XNVME 00:10:49.948 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.948 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:49.948 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:49.948 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:49.949 
16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:49.949 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:49.949 
16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:49.949 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:49.949 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 868980 ]] 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 868980 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.I1tAv9 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.I1tAv9/tests/target /tmp/spdk.I1tAv9 00:10:49.950 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88098230272 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552417792 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7454187520 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766175744 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776206848 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087474688 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110486016 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47776002048 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776210944 00:10:49.951 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=208896 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:49.951 * Looking for test storage... 
00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88098230272 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9668780032 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.951 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:49.951 16:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:49.951 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:49.952 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:49.952 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.952 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:49.952 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:49.952 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:49.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.952 --rc genhtml_branch_coverage=1 00:10:49.952 --rc genhtml_function_coverage=1 00:10:49.952 --rc genhtml_legend=1 00:10:49.952 --rc geninfo_all_blocks=1 00:10:49.952 --rc geninfo_unexecuted_blocks=1 00:10:49.952 00:10:49.952 ' 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.952 --rc genhtml_branch_coverage=1 00:10:49.952 --rc genhtml_function_coverage=1 00:10:49.952 --rc genhtml_legend=1 00:10:49.952 --rc geninfo_all_blocks=1 00:10:49.952 --rc geninfo_unexecuted_blocks=1 00:10:49.952 00:10:49.952 ' 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.952 --rc genhtml_branch_coverage=1 00:10:49.952 --rc genhtml_function_coverage=1 00:10:49.952 --rc genhtml_legend=1 00:10:49.952 --rc geninfo_all_blocks=1 00:10:49.952 --rc geninfo_unexecuted_blocks=1 00:10:49.952 00:10:49.952 ' 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:49.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.952 --rc genhtml_branch_coverage=1 00:10:49.952 --rc genhtml_function_coverage=1 00:10:49.952 --rc genhtml_legend=1 00:10:49.952 --rc geninfo_all_blocks=1 00:10:49.952 --rc geninfo_unexecuted_blocks=1 00:10:49.952 00:10:49.952 ' 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.952 16:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.952 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.211 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.212 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.777 16:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:56.777 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:56.777 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.777 16:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:56.777 Found net devices under 0000:af:00.0: cvl_0_0 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:56.777 Found net devices under 0000:af:00.1: cvl_0_1 00:10:56.777 16:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.777 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:10:56.778 00:10:56.778 --- 10.0.0.2 ping statistics --- 00:10:56.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.778 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:10:56.778 00:10:56.778 --- 10.0.0.1 ping statistics --- 00:10:56.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.778 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:56.778 16:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.778 ************************************ 00:10:56.778 START TEST nvmf_filesystem_no_in_capsule 00:10:56.778 ************************************ 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:56.778 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=871976 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 871976 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 871976 ']' 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.778 [2024-12-14 16:24:26.059725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:56.778 [2024-12-14 16:24:26.059770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.778 [2024-12-14 16:24:26.138624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.778 [2024-12-14 16:24:26.162709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.778 [2024-12-14 16:24:26.162742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:56.778 [2024-12-14 16:24:26.162749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.778 [2024-12-14 16:24:26.162755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.778 [2024-12-14 16:24:26.162760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.778 [2024-12-14 16:24:26.164207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.778 [2024-12-14 16:24:26.164316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.778 [2024-12-14 16:24:26.164428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.778 [2024-12-14 16:24:26.164429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.778 [2024-12-14 16:24:26.304956] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.778 Malloc1 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.778 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.779 [2024-12-14 16:24:26.468651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:56.779 16:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:56.779 { 00:10:56.779 "name": "Malloc1", 00:10:56.779 "aliases": [ 00:10:56.779 "27c73d3a-35e3-47b8-9da5-07bf2f0316bf" 00:10:56.779 ], 00:10:56.779 "product_name": "Malloc disk", 00:10:56.779 "block_size": 512, 00:10:56.779 "num_blocks": 1048576, 00:10:56.779 "uuid": "27c73d3a-35e3-47b8-9da5-07bf2f0316bf", 00:10:56.779 "assigned_rate_limits": { 00:10:56.779 "rw_ios_per_sec": 0, 00:10:56.779 "rw_mbytes_per_sec": 0, 00:10:56.779 "r_mbytes_per_sec": 0, 00:10:56.779 "w_mbytes_per_sec": 0 00:10:56.779 }, 00:10:56.779 "claimed": true, 00:10:56.779 "claim_type": "exclusive_write", 00:10:56.779 "zoned": false, 00:10:56.779 "supported_io_types": { 00:10:56.779 "read": true, 00:10:56.779 "write": true, 00:10:56.779 "unmap": true, 00:10:56.779 "flush": true, 00:10:56.779 "reset": true, 00:10:56.779 "nvme_admin": false, 00:10:56.779 "nvme_io": false, 00:10:56.779 "nvme_io_md": false, 00:10:56.779 "write_zeroes": true, 00:10:56.779 "zcopy": true, 00:10:56.779 "get_zone_info": false, 00:10:56.779 "zone_management": false, 00:10:56.779 "zone_append": false, 00:10:56.779 "compare": false, 00:10:56.779 "compare_and_write": 
false, 00:10:56.779 "abort": true, 00:10:56.779 "seek_hole": false, 00:10:56.779 "seek_data": false, 00:10:56.779 "copy": true, 00:10:56.779 "nvme_iov_md": false 00:10:56.779 }, 00:10:56.779 "memory_domains": [ 00:10:56.779 { 00:10:56.779 "dma_device_id": "system", 00:10:56.779 "dma_device_type": 1 00:10:56.779 }, 00:10:56.779 { 00:10:56.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.779 "dma_device_type": 2 00:10:56.779 } 00:10:56.779 ], 00:10:56.779 "driver_specific": {} 00:10:56.779 } 00:10:56.779 ]' 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:56.779 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.713 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:57.713 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:57.713 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.713 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:57.713 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:59.615 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:59.615 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:59.615 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.615 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:59.615 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.615 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:59.874 16:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:59.874 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:00.132 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:00.698 16:24:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:01.635 16:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.635 ************************************ 00:11:01.635 START TEST filesystem_ext4 00:11:01.635 ************************************ 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:01.635 16:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:01.635 16:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:01.635 mke2fs 1.47.0 (5-Feb-2023) 00:11:01.635 Discarding device blocks: 0/522240 done 00:11:01.635 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:01.635 Filesystem UUID: 2c4a5a2e-b79c-4556-a828-c55174c211a0 00:11:01.635 Superblock backups stored on blocks: 00:11:01.635 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:01.635 00:11:01.635 Allocating group tables: 0/64 done 00:11:01.635 Writing inode tables: 0/64 done 00:11:03.537 Creating journal (8192 blocks): done 00:11:04.731 Writing superblocks and filesystem accounting information: 0/64 done 00:11:04.731 00:11:04.731 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:04.731 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.380 16:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 871976 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.380 00:11:11.380 real 0m9.141s 00:11:11.380 user 0m0.026s 00:11:11.380 sys 0m0.078s 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:11.380 ************************************ 00:11:11.380 END TEST filesystem_ext4 00:11:11.380 ************************************ 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:11.380 
16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.380 ************************************ 00:11:11.380 START TEST filesystem_btrfs 00:11:11.380 ************************************ 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:11.380 16:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:11.380 btrfs-progs v6.8.1 00:11:11.380 See https://btrfs.readthedocs.io for more information. 00:11:11.380 00:11:11.380 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:11.380 NOTE: several default settings have changed in version 5.15, please make sure 00:11:11.380 this does not affect your deployments: 00:11:11.380 - DUP for metadata (-m dup) 00:11:11.380 - enabled no-holes (-O no-holes) 00:11:11.380 - enabled free-space-tree (-R free-space-tree) 00:11:11.380 00:11:11.380 Label: (null) 00:11:11.380 UUID: aa6a4c74-6cc8-401c-bc98-acac3a2e1669 00:11:11.380 Node size: 16384 00:11:11.380 Sector size: 4096 (CPU page size: 4096) 00:11:11.380 Filesystem size: 510.00MiB 00:11:11.380 Block group profiles: 00:11:11.380 Data: single 8.00MiB 00:11:11.380 Metadata: DUP 32.00MiB 00:11:11.380 System: DUP 8.00MiB 00:11:11.380 SSD detected: yes 00:11:11.380 Zoned device: no 00:11:11.380 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:11.380 Checksum: crc32c 00:11:11.380 Number of devices: 1 00:11:11.380 Devices: 00:11:11.380 ID SIZE PATH 00:11:11.380 1 510.00MiB /dev/nvme0n1p1 00:11:11.380 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:11.380 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.699 16:24:41 
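The trace above shows make_filesystem picking the mkfs force flag per filesystem type: `-F` for ext4, `-f` otherwise (autotest_common.sh@935-938, visible for both the ext4 and btrfs runs). A minimal standalone sketch of that selection; `force_flag` is a hypothetical name, the real helper inlines this logic:

```shell
#!/bin/sh
# Sketch of the force-flag selection traced in make_filesystem:
# mkfs.ext4 forces with -F, while mkfs.btrfs and mkfs.xfs use -f.
force_flag() {
  fstype=$1
  if [ "$fstype" = ext4 ]; then
    printf '%s\n' "-F"
  else
    printf '%s\n' "-f"
  fi
}

force_flag ext4    # prints -F
force_flag btrfs   # prints -f
force_flag xfs     # prints -f
```

The flag is then passed to `mkfs.$fstype $force /dev/nvme0n1p1`, as seen at autotest_common.sh@941 in the log.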
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.699 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 871976 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.700 00:11:11.700 real 0m0.917s 00:11:11.700 user 0m0.026s 00:11:11.700 sys 0m0.114s 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.700 
16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:11.700 ************************************ 00:11:11.700 END TEST filesystem_btrfs 00:11:11.700 ************************************ 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.700 ************************************ 00:11:11.700 START TEST filesystem_xfs 00:11:11.700 ************************************ 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:11.700 16:24:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:11.958 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:11.958 = sectsz=512 attr=2, projid32bit=1 00:11:11.959 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:11.959 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:11.959 data = bsize=4096 blocks=130560, imaxpct=25 00:11:11.959 = sunit=0 swidth=0 blks 00:11:11.959 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:11.959 log =internal log bsize=4096 blocks=16384, version=2 00:11:11.959 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:11.959 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:12.893 Discarding blocks...Done. 
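After each mkfs, the harness mounts the partition and runs a small write/remove smoke test (filesystem.sh@23-30 in the traces above: mount, touch, sync, rm, sync, umount). A sketch of the same sequence, using a temp directory in place of the mounted /dev/nvme0n1p1 so it runs without a block device:

```shell
#!/bin/sh
# Smoke-test sketch mirroring target/filesystem.sh@23-30: create a file,
# sync, remove it, sync again. The real harness does this on the freshly
# formatted /dev/nvme0n1p1 mounted at /mnt/device; a temp dir stands in here.
set -e
mnt=$(mktemp -d)
touch "$mnt/aaa"     # filesystem.sh@24
sync                 # filesystem.sh@25
rm "$mnt/aaa"        # filesystem.sh@26
sync                 # filesystem.sh@27
test ! -e "$mnt/aaa" && echo "smoke test passed"
rmdir "$mnt"         # the harness instead umounts /mnt/device
```

If the umount fails, the harness retries in a loop (the `i=0` at filesystem.sh@29 seeds that retry counter) before falling through to the lsblk checks at filesystem.sh@40-43.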
00:11:12.893 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:12.893 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 871976 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.798 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.056 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.056 16:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.056 00:11:15.056 real 0m3.134s 00:11:15.056 user 0m0.024s 00:11:15.056 sys 0m0.075s 00:11:15.056 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.056 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:15.056 ************************************ 00:11:15.056 END TEST filesystem_xfs 00:11:15.056 ************************************ 00:11:15.056 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:15.314 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:15.314 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.314 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.314 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.314 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.314 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.314 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.314 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 871976 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 871976 ']' 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 871976 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 871976 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 871976' 00:11:15.315 killing process with pid 871976 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 871976 00:11:15.315 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 871976 00:11:15.573 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:15.573 00:11:15.573 real 0m19.620s 00:11:15.573 user 1m17.357s 00:11:15.573 sys 0m1.447s 00:11:15.573 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.573 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.573 ************************************ 00:11:15.573 END TEST nvmf_filesystem_no_in_capsule 00:11:15.573 ************************************ 00:11:15.573 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:15.574 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.574 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.574 16:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.841 ************************************ 00:11:15.842 START TEST nvmf_filesystem_in_capsule 00:11:15.842 ************************************ 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=875474 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 875474 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 875474 ']' 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.842 16:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.842 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.842 [2024-12-14 16:24:45.752415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:15.842 [2024-12-14 16:24:45.752457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.842 [2024-12-14 16:24:45.830534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.842 [2024-12-14 16:24:45.854119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.842 [2024-12-14 16:24:45.854155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.842 [2024-12-14 16:24:45.854163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.842 [2024-12-14 16:24:45.854170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.842 [2024-12-14 16:24:45.854177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
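The waitforserial and waitforlisten helpers traced in this log (autotest_common.sh@1202-1212) share one bounded-polling shape: run a probe, sleep, retry up to a limit, then give up. A generic standalone sketch; `wait_for` is a hypothetical name, the real helpers hardcode their probes (lsblk piped to grep, or a check on the RPC socket):

```shell
#!/bin/sh
# Bounded polling: retry a probe command until it succeeds or the retry
# budget is spent. The harness sleeps 2s between probes (autotest_common.sh@1209);
# sleeping 0 here just keeps the sketch fast.
wait_for() {
  max_tries=$1; shift
  i=0
  while [ "$i" -le "$max_tries" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 0
  done
  return 1
}

wait_for 3 true && echo "probe succeeded"   # prints probe succeeded
```

In the log the probe is `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` compared against the expected device count, with a cap of 15 iterations (autotest_common.sh@1210).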
00:11:15.842 [2024-12-14 16:24:45.855657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.842 [2024-12-14 16:24:45.856164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.842 [2024-12-14 16:24:45.856249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.842 [2024-12-14 16:24:45.856250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.102 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.102 [2024-12-14 16:24:45.996548] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.102 Malloc1 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.102 16:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.102 [2024-12-14 16:24:46.148705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.102 16:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:16.102 { 00:11:16.102 "name": "Malloc1", 00:11:16.102 "aliases": [ 00:11:16.102 "444901a2-fcba-4065-8acc-4198f915fdf2" 00:11:16.102 ], 00:11:16.102 "product_name": "Malloc disk", 00:11:16.102 "block_size": 512, 00:11:16.102 "num_blocks": 1048576, 00:11:16.102 "uuid": "444901a2-fcba-4065-8acc-4198f915fdf2", 00:11:16.102 "assigned_rate_limits": { 00:11:16.102 "rw_ios_per_sec": 0, 00:11:16.102 "rw_mbytes_per_sec": 0, 00:11:16.102 "r_mbytes_per_sec": 0, 00:11:16.102 "w_mbytes_per_sec": 0 00:11:16.102 }, 00:11:16.102 "claimed": true, 00:11:16.102 "claim_type": "exclusive_write", 00:11:16.102 "zoned": false, 00:11:16.102 "supported_io_types": { 00:11:16.102 "read": true, 00:11:16.102 "write": true, 00:11:16.102 "unmap": true, 00:11:16.102 "flush": true, 00:11:16.102 "reset": true, 00:11:16.102 "nvme_admin": false, 00:11:16.102 "nvme_io": false, 00:11:16.102 "nvme_io_md": false, 00:11:16.102 "write_zeroes": true, 00:11:16.102 "zcopy": true, 00:11:16.102 "get_zone_info": false, 00:11:16.102 "zone_management": false, 00:11:16.102 "zone_append": false, 00:11:16.102 "compare": false, 00:11:16.102 "compare_and_write": false, 00:11:16.102 "abort": true, 00:11:16.102 "seek_hole": false, 00:11:16.102 "seek_data": false, 00:11:16.102 "copy": true, 00:11:16.102 "nvme_iov_md": false 00:11:16.102 }, 00:11:16.102 "memory_domains": [ 00:11:16.102 { 00:11:16.102 "dma_device_id": "system", 00:11:16.102 "dma_device_type": 1 00:11:16.102 }, 00:11:16.102 { 00:11:16.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.102 "dma_device_type": 2 00:11:16.102 } 00:11:16.102 ], 00:11:16.102 
"driver_specific": {} 00:11:16.102 } 00:11:16.102 ]' 00:11:16.102 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:16.360 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:16.360 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:16.360 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:16.360 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:16.360 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:16.360 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:16.360 16:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.294 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.294 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:17.294 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.294 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:17.294 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:19.824 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:19.824 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:19.824 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.824 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:19.825 16:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:19.825 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:20.083 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.458 ************************************ 00:11:21.458 START TEST filesystem_in_capsule_ext4 00:11:21.458 ************************************ 00:11:21.458 16:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:21.458 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:21.458 mke2fs 1.47.0 (5-Feb-2023) 00:11:21.458 Discarding device blocks: 
0/522240 done 00:11:21.458 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:21.458 Filesystem UUID: 30612a6f-718c-41ef-a7ee-8935e2b6ed93 00:11:21.458 Superblock backups stored on blocks: 00:11:21.458 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:21.458 00:11:21.458 Allocating group tables: 0/64 done 00:11:21.458 Writing inode tables: 0/64 done 00:11:21.717 Creating journal (8192 blocks): done 00:11:22.909 Writing superblocks and filesystem accounting information: 0/64 done 00:11:22.909 00:11:22.909 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:22.909 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 875474 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.469 00:11:29.469 real 0m7.282s 00:11:29.469 user 0m0.029s 00:11:29.469 sys 0m0.069s 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:29.469 ************************************ 00:11:29.469 END TEST filesystem_in_capsule_ext4 00:11:29.469 ************************************ 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.469 ************************************ 00:11:29.469 START 
TEST filesystem_in_capsule_btrfs 00:11:29.469 ************************************ 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:29.469 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:29.469 btrfs-progs v6.8.1 00:11:29.469 See https://btrfs.readthedocs.io for more information. 00:11:29.469 00:11:29.469 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:29.469 NOTE: several default settings have changed in version 5.15, please make sure 00:11:29.469 this does not affect your deployments: 00:11:29.469 - DUP for metadata (-m dup) 00:11:29.469 - enabled no-holes (-O no-holes) 00:11:29.469 - enabled free-space-tree (-R free-space-tree) 00:11:29.469 00:11:29.469 Label: (null) 00:11:29.469 UUID: 31af3511-863a-4ae7-bc73-8775d43e3269 00:11:29.469 Node size: 16384 00:11:29.469 Sector size: 4096 (CPU page size: 4096) 00:11:29.469 Filesystem size: 510.00MiB 00:11:29.469 Block group profiles: 00:11:29.469 Data: single 8.00MiB 00:11:29.469 Metadata: DUP 32.00MiB 00:11:29.469 System: DUP 8.00MiB 00:11:29.469 SSD detected: yes 00:11:29.469 Zoned device: no 00:11:29.470 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:29.470 Checksum: crc32c 00:11:29.470 Number of devices: 1 00:11:29.470 Devices: 00:11:29.470 ID SIZE PATH 00:11:29.470 1 510.00MiB /dev/nvme0n1p1 00:11:29.470 00:11:29.470 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:29.470 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 875474 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.470 00:11:29.470 real 0m0.913s 00:11:29.470 user 0m0.028s 00:11:29.470 sys 0m0.115s 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:29.470 ************************************ 00:11:29.470 END TEST filesystem_in_capsule_btrfs 00:11:29.470 ************************************ 00:11:29.470 16:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.470 ************************************ 00:11:29.470 START TEST filesystem_in_capsule_xfs 00:11:29.470 ************************************ 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:29.470 
16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:29.470 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:29.729 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:29.729 = sectsz=512 attr=2, projid32bit=1 00:11:29.729 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:29.729 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:29.729 data = bsize=4096 blocks=130560, imaxpct=25 00:11:29.729 = sunit=0 swidth=0 blks 00:11:29.729 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:29.729 log =internal log bsize=4096 blocks=16384, version=2 00:11:29.729 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:29.729 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:30.663 Discarding blocks...Done. 
00:11:30.663 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:30.663 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 875474 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:32.564 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.564 00:11:32.564 real 0m2.960s 00:11:32.564 user 0m0.021s 00:11:32.564 sys 0m0.079s 00:11:32.565 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.565 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.565 ************************************ 00:11:32.565 END TEST filesystem_in_capsule_xfs 00:11:32.565 ************************************ 00:11:32.565 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:32.823 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:32.823 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.823 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.823 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.823 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.823 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.082 16:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 875474 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 875474 ']' 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 875474 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.082 16:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 875474 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 875474' 00:11:33.082 killing process with pid 875474 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 875474 00:11:33.082 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 875474 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:33.342 00:11:33.342 real 0m17.613s 00:11:33.342 user 1m9.413s 00:11:33.342 sys 0m1.394s 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.342 ************************************ 00:11:33.342 END TEST nvmf_filesystem_in_capsule 00:11:33.342 ************************************ 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.342 rmmod nvme_tcp 00:11:33.342 rmmod nvme_fabrics 00:11:33.342 rmmod nvme_keyring 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.342 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.875 00:11:35.875 real 0m45.882s 00:11:35.875 user 2m28.859s 00:11:35.875 sys 0m7.426s 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.875 ************************************ 00:11:35.875 END TEST nvmf_filesystem 00:11:35.875 ************************************ 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.875 ************************************ 00:11:35.875 START TEST nvmf_target_discovery 00:11:35.875 ************************************ 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:35.875 * Looking for test storage... 
00:11:35.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.875 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:35.876 
16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.876 --rc genhtml_branch_coverage=1 00:11:35.876 --rc genhtml_function_coverage=1 00:11:35.876 --rc genhtml_legend=1 00:11:35.876 --rc geninfo_all_blocks=1 00:11:35.876 --rc geninfo_unexecuted_blocks=1 00:11:35.876 00:11:35.876 ' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.876 --rc genhtml_branch_coverage=1 00:11:35.876 --rc genhtml_function_coverage=1 00:11:35.876 --rc genhtml_legend=1 00:11:35.876 --rc geninfo_all_blocks=1 00:11:35.876 --rc geninfo_unexecuted_blocks=1 00:11:35.876 00:11:35.876 ' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.876 --rc genhtml_branch_coverage=1 00:11:35.876 --rc genhtml_function_coverage=1 00:11:35.876 --rc genhtml_legend=1 00:11:35.876 --rc geninfo_all_blocks=1 00:11:35.876 --rc geninfo_unexecuted_blocks=1 00:11:35.876 00:11:35.876 ' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.876 --rc genhtml_branch_coverage=1 00:11:35.876 --rc genhtml_function_coverage=1 00:11:35.876 --rc genhtml_legend=1 00:11:35.876 --rc geninfo_all_blocks=1 00:11:35.876 --rc geninfo_unexecuted_blocks=1 00:11:35.876 00:11:35.876 ' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.876 16:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:35.876 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.877 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.450 16:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.450 16:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:42.450 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:42.450 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.450 16:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:42.450 Found net devices under 0000:af:00.0: cvl_0_0 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.450 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.450 16:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:42.451 Found net devices under 0000:af:00.1: cvl_0_1 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:11:42.451 00:11:42.451 --- 10.0.0.2 ping statistics --- 00:11:42.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.451 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:11:42.451 00:11:42.451 --- 10.0.0.1 ping statistics --- 00:11:42.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.451 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=881958 00:11:42.451 16:25:11 
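The namespace plumbing traced above (nvmf/common.sh, steps @250-@291) can be condensed into a small helper. This is a hedged sketch, not part of the test suite: the `cvl_0_*` interface names and 10.0.0.x addresses are the ones observed in this run, the real commands need root and real NICs, so the helper defaults to a dry run that only prints what it would do.

```shell
# Sketch of the TCP test-network setup from the trace above. Dry run by
# default: RUN defaults to "echo", so each command is printed, not executed.
setup_tcp_namespace() {
  local run="${RUN:-echo}"                     # replace "echo" with "" to execute
  $run ip netns add cvl_0_0_ns_spdk            # target lives in its own netns
  $run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  $run ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator side
  $run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  $run ip link set cvl_0_1 up
  $run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  $run ip netns exec cvl_0_0_ns_spdk ip link set lo up
  $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
}
```

The trace then verifies reachability in both directions with `ping -c 1` before starting the target, which is why the two ping transcripts appear here.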
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 881958 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 881958 ']' 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.451 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.451 [2024-12-14 16:25:11.907410] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:42.451 [2024-12-14 16:25:11.907459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.451 [2024-12-14 16:25:11.985253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.451 [2024-12-14 16:25:12.008905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:42.451 [2024-12-14 16:25:12.008945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.451 [2024-12-14 16:25:12.008952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.451 [2024-12-14 16:25:12.008959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.451 [2024-12-14 16:25:12.008964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.451 [2024-12-14 16:25:12.010280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.451 [2024-12-14 16:25:12.010387] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.451 [2024-12-14 16:25:12.010493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.451 [2024-12-14 16:25:12.010494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.451 [2024-12-14 16:25:12.143243] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.451 Null1 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.451 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 
16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 [2024-12-14 16:25:12.195680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 Null2 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 
16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 Null3 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 Null4 00:11:42.452 
16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.452 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:42.452 00:11:42.452 Discovery Log Number of Records 6, Generation counter 6 00:11:42.452 =====Discovery Log Entry 0====== 00:11:42.452 trtype: tcp 00:11:42.452 adrfam: ipv4 00:11:42.452 subtype: current discovery subsystem 00:11:42.452 treq: not required 00:11:42.452 portid: 0 00:11:42.452 trsvcid: 4420 00:11:42.452 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.452 traddr: 10.0.0.2 00:11:42.452 eflags: explicit discovery connections, duplicate discovery information 00:11:42.452 sectype: none 00:11:42.452 =====Discovery Log Entry 1====== 00:11:42.452 trtype: tcp 00:11:42.452 adrfam: ipv4 00:11:42.452 subtype: nvme subsystem 00:11:42.452 treq: not required 00:11:42.452 portid: 0 00:11:42.452 trsvcid: 4420 00:11:42.452 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:42.452 traddr: 10.0.0.2 00:11:42.452 eflags: none 00:11:42.452 sectype: none 00:11:42.452 =====Discovery Log Entry 2====== 00:11:42.452 
trtype: tcp 00:11:42.452 adrfam: ipv4 00:11:42.452 subtype: nvme subsystem 00:11:42.452 treq: not required 00:11:42.452 portid: 0 00:11:42.452 trsvcid: 4420 00:11:42.452 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:42.452 traddr: 10.0.0.2 00:11:42.452 eflags: none 00:11:42.452 sectype: none 00:11:42.452 =====Discovery Log Entry 3====== 00:11:42.452 trtype: tcp 00:11:42.452 adrfam: ipv4 00:11:42.452 subtype: nvme subsystem 00:11:42.452 treq: not required 00:11:42.452 portid: 0 00:11:42.452 trsvcid: 4420 00:11:42.452 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:42.452 traddr: 10.0.0.2 00:11:42.452 eflags: none 00:11:42.452 sectype: none 00:11:42.452 =====Discovery Log Entry 4====== 00:11:42.452 trtype: tcp 00:11:42.452 adrfam: ipv4 00:11:42.452 subtype: nvme subsystem 00:11:42.452 treq: not required 00:11:42.452 portid: 0 00:11:42.452 trsvcid: 4420 00:11:42.452 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:42.452 traddr: 10.0.0.2 00:11:42.452 eflags: none 00:11:42.452 sectype: none 00:11:42.452 =====Discovery Log Entry 5====== 00:11:42.452 trtype: tcp 00:11:42.452 adrfam: ipv4 00:11:42.452 subtype: discovery subsystem referral 00:11:42.452 treq: not required 00:11:42.452 portid: 0 00:11:42.452 trsvcid: 4430 00:11:42.453 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.453 traddr: 10.0.0.2 00:11:42.453 eflags: none 00:11:42.453 sectype: none 00:11:42.453 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:42.453 Perform nvmf subsystem discovery via RPC 00:11:42.453 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:42.453 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.453 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.712 [ 00:11:42.712 { 00:11:42.712 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:42.712 "subtype": "Discovery", 00:11:42.712 "listen_addresses": [ 00:11:42.712 { 00:11:42.712 "trtype": "TCP", 00:11:42.712 "adrfam": "IPv4", 00:11:42.712 "traddr": "10.0.0.2", 00:11:42.712 "trsvcid": "4420" 00:11:42.712 } 00:11:42.712 ], 00:11:42.712 "allow_any_host": true, 00:11:42.712 "hosts": [] 00:11:42.712 }, 00:11:42.712 { 00:11:42.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.712 "subtype": "NVMe", 00:11:42.712 "listen_addresses": [ 00:11:42.712 { 00:11:42.712 "trtype": "TCP", 00:11:42.712 "adrfam": "IPv4", 00:11:42.712 "traddr": "10.0.0.2", 00:11:42.712 "trsvcid": "4420" 00:11:42.712 } 00:11:42.712 ], 00:11:42.712 "allow_any_host": true, 00:11:42.712 "hosts": [], 00:11:42.712 "serial_number": "SPDK00000000000001", 00:11:42.712 "model_number": "SPDK bdev Controller", 00:11:42.712 "max_namespaces": 32, 00:11:42.712 "min_cntlid": 1, 00:11:42.712 "max_cntlid": 65519, 00:11:42.712 "namespaces": [ 00:11:42.712 { 00:11:42.712 "nsid": 1, 00:11:42.712 "bdev_name": "Null1", 00:11:42.712 "name": "Null1", 00:11:42.712 "nguid": "52681FE3563B44C098BD322AA724C3B6", 00:11:42.712 "uuid": "52681fe3-563b-44c0-98bd-322aa724c3b6" 00:11:42.712 } 00:11:42.712 ] 00:11:42.712 }, 00:11:42.712 { 00:11:42.713 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:42.713 "subtype": "NVMe", 00:11:42.713 "listen_addresses": [ 00:11:42.713 { 00:11:42.713 "trtype": "TCP", 00:11:42.713 "adrfam": "IPv4", 00:11:42.713 "traddr": "10.0.0.2", 00:11:42.713 "trsvcid": "4420" 00:11:42.713 } 00:11:42.713 ], 00:11:42.713 "allow_any_host": true, 00:11:42.713 "hosts": [], 00:11:42.713 "serial_number": "SPDK00000000000002", 00:11:42.713 "model_number": "SPDK bdev Controller", 00:11:42.713 "max_namespaces": 32, 00:11:42.713 "min_cntlid": 1, 00:11:42.713 "max_cntlid": 65519, 00:11:42.713 "namespaces": [ 00:11:42.713 { 00:11:42.713 "nsid": 1, 00:11:42.713 "bdev_name": "Null2", 00:11:42.713 "name": "Null2", 00:11:42.713 "nguid": "E170E29DF3884AD4BFBE19AD39540DF8", 
00:11:42.713 "uuid": "e170e29d-f388-4ad4-bfbe-19ad39540df8" 00:11:42.713 } 00:11:42.713 ] 00:11:42.713 }, 00:11:42.713 { 00:11:42.713 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:42.713 "subtype": "NVMe", 00:11:42.713 "listen_addresses": [ 00:11:42.713 { 00:11:42.713 "trtype": "TCP", 00:11:42.713 "adrfam": "IPv4", 00:11:42.713 "traddr": "10.0.0.2", 00:11:42.713 "trsvcid": "4420" 00:11:42.713 } 00:11:42.713 ], 00:11:42.713 "allow_any_host": true, 00:11:42.713 "hosts": [], 00:11:42.713 "serial_number": "SPDK00000000000003", 00:11:42.713 "model_number": "SPDK bdev Controller", 00:11:42.713 "max_namespaces": 32, 00:11:42.713 "min_cntlid": 1, 00:11:42.713 "max_cntlid": 65519, 00:11:42.713 "namespaces": [ 00:11:42.713 { 00:11:42.713 "nsid": 1, 00:11:42.713 "bdev_name": "Null3", 00:11:42.713 "name": "Null3", 00:11:42.713 "nguid": "E7ACC7150D564A94AA06CA5C0F92CC0C", 00:11:42.713 "uuid": "e7acc715-0d56-4a94-aa06-ca5c0f92cc0c" 00:11:42.713 } 00:11:42.713 ] 00:11:42.713 }, 00:11:42.713 { 00:11:42.713 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:42.713 "subtype": "NVMe", 00:11:42.713 "listen_addresses": [ 00:11:42.713 { 00:11:42.713 "trtype": "TCP", 00:11:42.713 "adrfam": "IPv4", 00:11:42.713 "traddr": "10.0.0.2", 00:11:42.713 "trsvcid": "4420" 00:11:42.713 } 00:11:42.713 ], 00:11:42.713 "allow_any_host": true, 00:11:42.713 "hosts": [], 00:11:42.713 "serial_number": "SPDK00000000000004", 00:11:42.713 "model_number": "SPDK bdev Controller", 00:11:42.713 "max_namespaces": 32, 00:11:42.713 "min_cntlid": 1, 00:11:42.713 "max_cntlid": 65519, 00:11:42.713 "namespaces": [ 00:11:42.713 { 00:11:42.713 "nsid": 1, 00:11:42.713 "bdev_name": "Null4", 00:11:42.713 "name": "Null4", 00:11:42.713 "nguid": "C414823852EC47A6A6BDA15F9077168A", 00:11:42.713 "uuid": "c4148238-52ec-47a6-a6bd-a15f9077168a" 00:11:42.713 } 00:11:42.713 ] 00:11:42.713 } 00:11:42.713 ] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 
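The subsystem layout that `nvmf_get_subsystems` just reported back is built by the `seq 1 4` loop in target/discovery.sh above: one null bdev, one subsystem, one namespace, and one TCP listener per index, plus the discovery listener. A hedged stand-alone sketch of that loop follows; it assumes the standard `scripts/rpc.py` location in an SPDK checkout and defaults to printing the RPC invocations rather than issuing them against a live `nvmf_tgt`.

```shell
# Sketch of the subsystem setup performed by target/discovery.sh above.
# Dry run by default: RPC defaults to an echo prefix; set RPC=scripts/rpc.py
# (path is an assumption about the checkout layout) to drive a running target.
setup_nvmf_targets() {
  local rpc="${RPC:-echo scripts/rpc.py}"
  $rpc nvmf_create_transport -t tcp -o -u 8192               # TCP transport, 8 KiB I/O unit
  local i
  for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512                # 100 MiB null bdev, 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery service
}
```

With the referral added on port 4430 as well, this yields exactly the six discovery log entries shown in the `nvme discover` output above: the current discovery subsystem, cnode1-4, and the referral.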
16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.713 rmmod nvme_tcp 00:11:42.713 rmmod nvme_fabrics 00:11:42.713 rmmod nvme_keyring 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 881958 ']' 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 881958 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 881958 ']' 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 881958 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:42.713 
16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 881958 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 881958' 00:11:42.713 killing process with pid 881958 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 881958 00:11:42.713 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 881958 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.973 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.509 00:11:45.509 real 0m9.461s 00:11:45.509 user 0m5.581s 00:11:45.509 sys 0m4.865s 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.509 ************************************ 00:11:45.509 END TEST nvmf_target_discovery 00:11:45.509 ************************************ 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.509 ************************************ 00:11:45.509 START TEST nvmf_referrals 00:11:45.509 ************************************ 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.509 * Looking for test storage... 
00:11:45.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:45.509 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:45.510 16:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:45.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.510 
--rc genhtml_branch_coverage=1 00:11:45.510 --rc genhtml_function_coverage=1 00:11:45.510 --rc genhtml_legend=1 00:11:45.510 --rc geninfo_all_blocks=1 00:11:45.510 --rc geninfo_unexecuted_blocks=1 00:11:45.510 00:11:45.510 ' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:45.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.510 --rc genhtml_branch_coverage=1 00:11:45.510 --rc genhtml_function_coverage=1 00:11:45.510 --rc genhtml_legend=1 00:11:45.510 --rc geninfo_all_blocks=1 00:11:45.510 --rc geninfo_unexecuted_blocks=1 00:11:45.510 00:11:45.510 ' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:45.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.510 --rc genhtml_branch_coverage=1 00:11:45.510 --rc genhtml_function_coverage=1 00:11:45.510 --rc genhtml_legend=1 00:11:45.510 --rc geninfo_all_blocks=1 00:11:45.510 --rc geninfo_unexecuted_blocks=1 00:11:45.510 00:11:45.510 ' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:45.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.510 --rc genhtml_branch_coverage=1 00:11:45.510 --rc genhtml_function_coverage=1 00:11:45.510 --rc genhtml_legend=1 00:11:45.510 --rc geninfo_all_blocks=1 00:11:45.510 --rc geninfo_unexecuted_blocks=1 00:11:45.510 00:11:45.510 ' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.510 
16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.510 16:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.510 16:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.510 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:52.083 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:52.083 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:52.083 Found net devices under 0000:af:00.0: cvl_0_0 00:11:52.083 16:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:52.083 Found net devices under 0000:af:00.1: cvl_0_1 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.083 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:11:52.083 00:11:52.083 --- 10.0.0.2 ping statistics --- 00:11:52.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.083 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:11:52.083 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:11:52.083 00:11:52.084 --- 10.0.0.1 ping statistics --- 00:11:52.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.084 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=885668 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 885668 00:11:52.084 
16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 885668 ']' 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 [2024-12-14 16:25:21.266070] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:52.084 [2024-12-14 16:25:21.266121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.084 [2024-12-14 16:25:21.345375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.084 [2024-12-14 16:25:21.368892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.084 [2024-12-14 16:25:21.368932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:52.084 [2024-12-14 16:25:21.368939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.084 [2024-12-14 16:25:21.368945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.084 [2024-12-14 16:25:21.368950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.084 [2024-12-14 16:25:21.370448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.084 [2024-12-14 16:25:21.370579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.084 [2024-12-14 16:25:21.370665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.084 [2024-12-14 16:25:21.370666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 [2024-12-14 16:25:21.510967] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 [2024-12-14 16:25:21.536714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:52.084 16:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:52.084 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.085 16:25:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.343 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:52.601 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:52.601 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:52.601 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:52.601 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:52.601 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.601 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.859 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.117 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:53.117 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:53.117 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:53.117 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:53.117 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:53.117 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.117 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:53.117 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:53.117 16:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:53.117 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:53.117 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:53.117 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.117 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.375 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.633 rmmod nvme_tcp 00:11:53.633 rmmod nvme_fabrics 00:11:53.633 rmmod nvme_keyring 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 885668 ']' 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 885668 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 885668 ']' 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 885668 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885668 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885668' 00:11:53.633 killing process with pid 885668 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 885668 00:11:53.633 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 885668 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.892 16:25:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.427 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.427 00:11:56.427 real 0m10.862s 00:11:56.427 user 0m12.486s 00:11:56.427 sys 0m5.211s 00:11:56.427 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.427 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.427 ************************************ 
00:11:56.427 END TEST nvmf_referrals 00:11:56.427 ************************************ 00:11:56.427 16:25:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:56.427 16:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.427 16:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.427 16:25:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.427 ************************************ 00:11:56.427 START TEST nvmf_connect_disconnect 00:11:56.427 ************************************ 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:56.427 * Looking for test storage... 
00:11:56.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.427 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.427 --rc genhtml_branch_coverage=1 00:11:56.427 --rc genhtml_function_coverage=1 00:11:56.427 --rc genhtml_legend=1 00:11:56.427 --rc geninfo_all_blocks=1 00:11:56.427 --rc geninfo_unexecuted_blocks=1 00:11:56.428 00:11:56.428 ' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.428 --rc genhtml_branch_coverage=1 00:11:56.428 --rc genhtml_function_coverage=1 00:11:56.428 --rc genhtml_legend=1 00:11:56.428 --rc geninfo_all_blocks=1 00:11:56.428 --rc geninfo_unexecuted_blocks=1 00:11:56.428 00:11:56.428 ' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.428 --rc genhtml_branch_coverage=1 00:11:56.428 --rc genhtml_function_coverage=1 00:11:56.428 --rc genhtml_legend=1 00:11:56.428 --rc geninfo_all_blocks=1 00:11:56.428 --rc geninfo_unexecuted_blocks=1 00:11:56.428 00:11:56.428 ' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.428 --rc genhtml_branch_coverage=1 00:11:56.428 --rc genhtml_function_coverage=1 00:11:56.428 --rc genhtml_legend=1 00:11:56.428 --rc geninfo_all_blocks=1 00:11:56.428 --rc geninfo_unexecuted_blocks=1 00:11:56.428 00:11:56.428 ' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
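The `line 33: [: : integer expression expected` error captured above comes from `'[' '' -eq 1 ']'`: the `-eq` operator requires integer operands, and an unset or empty variable expands to an empty string, so the test itself errors out instead of evaluating to false. A small sketch of the failure mode and a common guard (the `flag` variable is hypothetical, not taken from nvmf/common.sh):

```shell
# With an empty operand the numeric test errors out (exit status > 1),
# which is distinct from a clean "false" (exit status 1).
flag=''
status=0
[ "$flag" -eq 1 ] 2>/dev/null || status=$?
echo "unguarded status: $status"   # greater than 1: the test itself failed

# Defaulting the expansion keeps the comparison well-formed.
status=0
[ "${flag:-0}" -eq 1 ] || status=$?
echo "guarded status: $status"     # 1: an ordinary false
```

Because the script runs without `set -e` at that point, the log shows the error and simply continues down the `else` path, which is why the test proceeds normally afterward.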
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.428 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.998 16:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:02.998 16:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:02.998 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:02.998 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.998 16:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:02.998 Found net devices under 0000:af:00.0: cvl_0_0 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.998 16:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:02.998 Found net devices under 0000:af:00.1: cvl_0_1 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:02.998 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.999 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.999 16:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:02.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:12:02.999 00:12:02.999 --- 10.0.0.2 ping statistics --- 00:12:02.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.999 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:12:02.999 00:12:02.999 --- 10.0.0.1 ping statistics --- 00:12:02.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.999 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=889678 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 889678 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 889678 ']' 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 [2024-12-14 16:25:32.255396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:02.999 [2024-12-14 16:25:32.255439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.999 [2024-12-14 16:25:32.331080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.999 [2024-12-14 16:25:32.353147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:02.999 [2024-12-14 16:25:32.353188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.999 [2024-12-14 16:25:32.353194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.999 [2024-12-14 16:25:32.353200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.999 [2024-12-14 16:25:32.353205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.999 [2024-12-14 16:25:32.354852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.999 [2024-12-14 16:25:32.354959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.999 [2024-12-14 16:25:32.355077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.999 [2024-12-14 16:25:32.355077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:02.999 16:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 [2024-12-14 16:25:32.499340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.999 16:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.999 [2024-12-14 16:25:32.567194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:02.999 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:04.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.479 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.517 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.262 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.467 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.451 rmmod nvme_tcp 00:15:54.451 rmmod nvme_fabrics 00:15:54.451 rmmod nvme_keyring 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 889678 ']' 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 889678 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 889678 ']' 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 889678 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 889678 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.451 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 889678' 00:15:54.451 killing process with pid 889678 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 889678 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 889678 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:54.452 16:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.452 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.357 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:56.357 00:15:56.357 real 4m0.368s 00:15:56.357 user 15m17.754s 00:15:56.357 sys 0m25.155s 00:15:56.357 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.357 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:56.357 ************************************ 00:15:56.357 END TEST nvmf_connect_disconnect 00:15:56.357 ************************************ 00:15:56.358 16:29:26 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:56.358 16:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:56.358 16:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.358 16:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.617 ************************************ 00:15:56.617 START TEST nvmf_multitarget 00:15:56.617 ************************************ 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:56.617 * Looking for test storage... 00:15:56.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:56.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.617 --rc genhtml_branch_coverage=1 00:15:56.617 --rc genhtml_function_coverage=1 00:15:56.617 --rc genhtml_legend=1 00:15:56.617 --rc geninfo_all_blocks=1 00:15:56.617 --rc 
geninfo_unexecuted_blocks=1 00:15:56.617 00:15:56.617 ' 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:56.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.617 --rc genhtml_branch_coverage=1 00:15:56.617 --rc genhtml_function_coverage=1 00:15:56.617 --rc genhtml_legend=1 00:15:56.617 --rc geninfo_all_blocks=1 00:15:56.617 --rc geninfo_unexecuted_blocks=1 00:15:56.617 00:15:56.617 ' 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:56.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.617 --rc genhtml_branch_coverage=1 00:15:56.617 --rc genhtml_function_coverage=1 00:15:56.617 --rc genhtml_legend=1 00:15:56.617 --rc geninfo_all_blocks=1 00:15:56.617 --rc geninfo_unexecuted_blocks=1 00:15:56.617 00:15:56.617 ' 00:15:56.617 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:56.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.618 --rc genhtml_branch_coverage=1 00:15:56.618 --rc genhtml_function_coverage=1 00:15:56.618 --rc genhtml_legend=1 00:15:56.618 --rc geninfo_all_blocks=1 00:15:56.618 --rc geninfo_unexecuted_blocks=1 00:15:56.618 00:15:56.618 ' 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.618 16:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.618 16:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:56.618 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.194 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:03.195 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:03.195 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:03.195 Found net devices under 0000:af:00.0: cvl_0_0 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:03.195 Found net devices under 0000:af:00.1: cvl_0_1 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.195 16:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:16:03.195 00:16:03.195 --- 10.0.0.2 ping statistics --- 00:16:03.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.195 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:16:03.195 00:16:03.195 --- 10.0.0.1 ping statistics --- 00:16:03.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.195 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=932317 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 932317 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 932317 ']' 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:03.195 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:03.196 [2024-12-14 16:29:32.660054] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:03.196 [2024-12-14 16:29:32.660096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.196 [2024-12-14 16:29:32.738626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.196 [2024-12-14 16:29:32.762046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.196 [2024-12-14 16:29:32.762085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.196 [2024-12-14 16:29:32.762093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.196 [2024-12-14 16:29:32.762100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.196 [2024-12-14 16:29:32.762105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:03.196 [2024-12-14 16:29:32.763409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.196 [2024-12-14 16:29:32.763447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.196 [2024-12-14 16:29:32.763470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.196 [2024-12-14 16:29:32.763470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:03.196 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:03.196 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:03.196 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:03.196 "nvmf_tgt_1" 00:16:03.196 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:03.196 "nvmf_tgt_2" 00:16:03.196 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:03.196 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:03.454 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:03.454 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:03.454 true 00:16:03.454 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:03.454 true 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.713 16:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.713 rmmod nvme_tcp 00:16:03.713 rmmod nvme_fabrics 00:16:03.713 rmmod nvme_keyring 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 932317 ']' 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 932317 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 932317 ']' 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 932317 00:16:03.713 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:03.714 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.714 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 932317 00:16:03.714 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.714 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:16:03.714 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 932317' 00:16:03.714 killing process with pid 932317 00:16:03.714 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 932317 00:16:03.714 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 932317 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.973 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:06.509 00:16:06.509 
real 0m9.545s 00:16:06.509 user 0m7.196s 00:16:06.509 sys 0m4.855s 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:06.509 ************************************ 00:16:06.509 END TEST nvmf_multitarget 00:16:06.509 ************************************ 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.509 ************************************ 00:16:06.509 START TEST nvmf_rpc 00:16:06.509 ************************************ 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:06.509 * Looking for test storage... 
00:16:06.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.509 16:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:06.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.509 --rc genhtml_branch_coverage=1 00:16:06.509 --rc genhtml_function_coverage=1 00:16:06.509 --rc genhtml_legend=1 00:16:06.509 --rc geninfo_all_blocks=1 00:16:06.509 --rc geninfo_unexecuted_blocks=1 
00:16:06.509 00:16:06.509 ' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:06.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.509 --rc genhtml_branch_coverage=1 00:16:06.509 --rc genhtml_function_coverage=1 00:16:06.509 --rc genhtml_legend=1 00:16:06.509 --rc geninfo_all_blocks=1 00:16:06.509 --rc geninfo_unexecuted_blocks=1 00:16:06.509 00:16:06.509 ' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:06.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.509 --rc genhtml_branch_coverage=1 00:16:06.509 --rc genhtml_function_coverage=1 00:16:06.509 --rc genhtml_legend=1 00:16:06.509 --rc geninfo_all_blocks=1 00:16:06.509 --rc geninfo_unexecuted_blocks=1 00:16:06.509 00:16:06.509 ' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:06.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.509 --rc genhtml_branch_coverage=1 00:16:06.509 --rc genhtml_function_coverage=1 00:16:06.509 --rc genhtml_legend=1 00:16:06.509 --rc geninfo_all_blocks=1 00:16:06.509 --rc geninfo_unexecuted_blocks=1 00:16:06.509 00:16:06.509 ' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.509 16:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.509 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:06.510 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:06.510 16:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.083 
16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:16:13.083 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:13.083 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:13.083 Found net devices under 0000:af:00.0: cvl_0_0 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:13.083 Found net devices under 0000:af:00.1: cvl_0_1 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.083 16:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:13.083 
16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:13.083 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:13.083 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:13.083 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:13.083 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:13.083 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:13.083 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:13.083 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:13.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:16:13.084 00:16:13.084 --- 10.0.0.2 ping statistics --- 00:16:13.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.084 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:13.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:13.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:16:13.084 00:16:13.084 --- 10.0.0.1 ping statistics --- 00:16:13.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.084 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=936051 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.084 
16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 936051 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 936051 ']' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.084 [2024-12-14 16:29:42.264029] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:13.084 [2024-12-14 16:29:42.264080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.084 [2024-12-14 16:29:42.340781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.084 [2024-12-14 16:29:42.363511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.084 [2024-12-14 16:29:42.363550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.084 [2024-12-14 16:29:42.363560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.084 [2024-12-14 16:29:42.363566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:13.084 [2024-12-14 16:29:42.363571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.084 [2024-12-14 16:29:42.365066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.084 [2024-12-14 16:29:42.365104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.084 [2024-12-14 16:29:42.365212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.084 [2024-12-14 16:29:42.365213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:13.084 "tick_rate": 2100000000, 00:16:13.084 "poll_groups": [ 00:16:13.084 { 00:16:13.084 "name": "nvmf_tgt_poll_group_000", 00:16:13.084 "admin_qpairs": 0, 00:16:13.084 "io_qpairs": 0, 00:16:13.084 
"current_admin_qpairs": 0, 00:16:13.084 "current_io_qpairs": 0, 00:16:13.084 "pending_bdev_io": 0, 00:16:13.084 "completed_nvme_io": 0, 00:16:13.084 "transports": [] 00:16:13.084 }, 00:16:13.084 { 00:16:13.084 "name": "nvmf_tgt_poll_group_001", 00:16:13.084 "admin_qpairs": 0, 00:16:13.084 "io_qpairs": 0, 00:16:13.084 "current_admin_qpairs": 0, 00:16:13.084 "current_io_qpairs": 0, 00:16:13.084 "pending_bdev_io": 0, 00:16:13.084 "completed_nvme_io": 0, 00:16:13.084 "transports": [] 00:16:13.084 }, 00:16:13.084 { 00:16:13.084 "name": "nvmf_tgt_poll_group_002", 00:16:13.084 "admin_qpairs": 0, 00:16:13.084 "io_qpairs": 0, 00:16:13.084 "current_admin_qpairs": 0, 00:16:13.084 "current_io_qpairs": 0, 00:16:13.084 "pending_bdev_io": 0, 00:16:13.084 "completed_nvme_io": 0, 00:16:13.084 "transports": [] 00:16:13.084 }, 00:16:13.084 { 00:16:13.084 "name": "nvmf_tgt_poll_group_003", 00:16:13.084 "admin_qpairs": 0, 00:16:13.084 "io_qpairs": 0, 00:16:13.084 "current_admin_qpairs": 0, 00:16:13.084 "current_io_qpairs": 0, 00:16:13.084 "pending_bdev_io": 0, 00:16:13.084 "completed_nvme_io": 0, 00:16:13.084 "transports": [] 00:16:13.084 } 00:16:13.084 ] 00:16:13.084 }' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.084 [2024-12-14 16:29:42.618229] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:13.084 "tick_rate": 2100000000, 00:16:13.084 "poll_groups": [ 00:16:13.084 { 00:16:13.084 "name": "nvmf_tgt_poll_group_000", 00:16:13.084 "admin_qpairs": 0, 00:16:13.084 "io_qpairs": 0, 00:16:13.084 "current_admin_qpairs": 0, 00:16:13.084 "current_io_qpairs": 0, 00:16:13.084 "pending_bdev_io": 0, 00:16:13.084 "completed_nvme_io": 0, 00:16:13.084 "transports": [ 00:16:13.084 { 00:16:13.084 "trtype": "TCP" 00:16:13.084 } 00:16:13.084 ] 00:16:13.084 }, 00:16:13.084 { 00:16:13.084 "name": "nvmf_tgt_poll_group_001", 00:16:13.084 "admin_qpairs": 0, 00:16:13.084 "io_qpairs": 0, 00:16:13.084 "current_admin_qpairs": 0, 00:16:13.084 "current_io_qpairs": 0, 00:16:13.084 "pending_bdev_io": 0, 00:16:13.084 "completed_nvme_io": 0, 00:16:13.084 "transports": [ 00:16:13.084 { 00:16:13.084 "trtype": "TCP" 00:16:13.084 } 00:16:13.084 ] 00:16:13.084 }, 00:16:13.084 { 00:16:13.084 "name": "nvmf_tgt_poll_group_002", 00:16:13.084 "admin_qpairs": 0, 00:16:13.084 "io_qpairs": 0, 00:16:13.084 
"current_admin_qpairs": 0, 00:16:13.084 "current_io_qpairs": 0, 00:16:13.084 "pending_bdev_io": 0, 00:16:13.084 "completed_nvme_io": 0, 00:16:13.084 "transports": [ 00:16:13.084 { 00:16:13.084 "trtype": "TCP" 00:16:13.084 } 00:16:13.084 ] 00:16:13.084 }, 00:16:13.084 { 00:16:13.084 "name": "nvmf_tgt_poll_group_003", 00:16:13.084 "admin_qpairs": 0, 00:16:13.084 "io_qpairs": 0, 00:16:13.084 "current_admin_qpairs": 0, 00:16:13.084 "current_io_qpairs": 0, 00:16:13.084 "pending_bdev_io": 0, 00:16:13.084 "completed_nvme_io": 0, 00:16:13.084 "transports": [ 00:16:13.084 { 00:16:13.084 "trtype": "TCP" 00:16:13.084 } 00:16:13.084 ] 00:16:13.084 } 00:16:13.084 ] 00:16:13.084 }' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:13.084 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.085 Malloc1 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.085 [2024-12-14 16:29:42.806472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.085 
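The `jsum` helper traced at the start of this section (`target/rpc.sh@19`–`@20`) pipes `rpc.py nvmf_get_stats` output through a jq filter and sums the resulting numbers with awk. A minimal standalone sketch of that summing stage, with sample per-poll-group counts standing in for the jq output (assumption: no live SPDK target is available here):

```shell
# Summing stage of the jsum helper from the log. In the real script the
# input comes from: rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs'
# Here, sample values stand in for that output (hypothetical data).
sum_values() { awk '{s+=$1} END {print s}'; }

# Four idle poll groups, as in the traced run where (( 0 == 0 )) passes.
printf '0\n0\n0\n0\n' | sum_values
```

The `(( 0 == 0 ))` checks in the log are this sum being compared against the expected qpair total.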
16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:13.085 [2024-12-14 16:29:42.835046] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:13.085 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:13.085 could not add new controller: failed to write to nvme-fabrics device 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.085 16:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.085 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:14.021 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:14.021 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:14.021 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.021 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:14.021 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.557 16:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.557 [2024-12-14 16:29:46.188581] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:16.557 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:16.557 could not add new controller: failed to write to nvme-fabrics device 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.557 16:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.557 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.603 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.603 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:17.603 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.603 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:17.603 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.517 [2024-12-14 16:29:49.482440] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.517 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.893 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.893 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:20.893 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.893 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:20.893 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.797 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.797 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.797 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.797 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.798 16:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.798 [2024-12-14 16:29:52.871491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.798 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.056 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.056 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:23.056 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.056 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.056 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.056 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.992 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:23.992 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:23.992 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.992 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:23.992 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:26.525 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:26.525 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:26.525 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:26.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.525 [2024-12-14 16:29:56.272950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.525 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.462 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:27.462 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:27.462 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:27.462 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:27.462 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:29.365 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:29.365 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:29.365 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.365 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:29.365 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.365 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:29.365 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.624 [2024-12-14 16:29:59.535016] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.624 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:30.560 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:30.560 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:30.560 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.560 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:30.560 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.094 [2024-12-14 16:30:02.830109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.094 16:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.094 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.030 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.030 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:34.030 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.030 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:34.030 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 [2024-12-14 16:30:06.190830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.571 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.571 [2024-12-14 16:30:06.238925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.572 
16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 [2024-12-14 16:30:06.287076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:36.572 
16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 [2024-12-14 16:30:06.335237] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 [2024-12-14 
16:30:06.387393] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 
16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.572 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:36.572 "tick_rate": 2100000000, 00:16:36.572 "poll_groups": [ 00:16:36.572 { 00:16:36.572 "name": "nvmf_tgt_poll_group_000", 00:16:36.572 "admin_qpairs": 2, 00:16:36.572 "io_qpairs": 168, 00:16:36.572 "current_admin_qpairs": 0, 00:16:36.572 "current_io_qpairs": 0, 00:16:36.572 "pending_bdev_io": 0, 00:16:36.572 "completed_nvme_io": 270, 00:16:36.572 "transports": [ 00:16:36.572 { 00:16:36.572 "trtype": "TCP" 00:16:36.572 } 00:16:36.572 ] 00:16:36.572 }, 00:16:36.572 { 00:16:36.572 "name": "nvmf_tgt_poll_group_001", 00:16:36.572 "admin_qpairs": 2, 00:16:36.572 "io_qpairs": 168, 00:16:36.572 "current_admin_qpairs": 0, 00:16:36.572 "current_io_qpairs": 0, 00:16:36.572 "pending_bdev_io": 0, 00:16:36.572 "completed_nvme_io": 316, 00:16:36.572 "transports": [ 00:16:36.572 { 00:16:36.572 "trtype": "TCP" 00:16:36.572 } 00:16:36.572 ] 00:16:36.572 }, 00:16:36.572 { 00:16:36.572 "name": "nvmf_tgt_poll_group_002", 00:16:36.572 "admin_qpairs": 1, 00:16:36.572 "io_qpairs": 168, 00:16:36.572 "current_admin_qpairs": 0, 00:16:36.572 "current_io_qpairs": 0, 00:16:36.572 "pending_bdev_io": 0, 00:16:36.572 "completed_nvme_io": 218, 00:16:36.572 "transports": [ 00:16:36.572 { 00:16:36.573 "trtype": "TCP" 00:16:36.573 } 00:16:36.573 ] 00:16:36.573 }, 00:16:36.573 { 00:16:36.573 "name": "nvmf_tgt_poll_group_003", 00:16:36.573 "admin_qpairs": 2, 00:16:36.573 "io_qpairs": 168, 
00:16:36.573 "current_admin_qpairs": 0, 00:16:36.573 "current_io_qpairs": 0, 00:16:36.573 "pending_bdev_io": 0, 00:16:36.573 "completed_nvme_io": 218, 00:16:36.573 "transports": [ 00:16:36.573 { 00:16:36.573 "trtype": "TCP" 00:16:36.573 } 00:16:36.573 ] 00:16:36.573 } 00:16:36.573 ] 00:16:36.573 }' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.573 rmmod nvme_tcp 00:16:36.573 rmmod nvme_fabrics 00:16:36.573 rmmod nvme_keyring 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 936051 ']' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 936051 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 936051 ']' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 936051 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 936051 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 936051' 00:16:36.573 killing process with pid 936051 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 936051 00:16:36.573 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 936051 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.832 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.369 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:39.369 00:16:39.369 real 0m32.813s 00:16:39.369 user 1m39.186s 00:16:39.369 sys 0m6.399s 00:16:39.369 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.369 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.369 ************************************ 00:16:39.369 END TEST nvmf_rpc 00:16:39.369 
************************************ 00:16:39.369 16:30:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:39.369 16:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.369 16:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.369 16:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.369 ************************************ 00:16:39.369 START TEST nvmf_invalid 00:16:39.369 ************************************ 00:16:39.369 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:39.369 * Looking for test storage... 00:16:39.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.369 --rc genhtml_branch_coverage=1 00:16:39.369 --rc genhtml_function_coverage=1 00:16:39.369 --rc genhtml_legend=1 00:16:39.369 --rc geninfo_all_blocks=1 00:16:39.369 --rc geninfo_unexecuted_blocks=1 00:16:39.369 00:16:39.369 ' 
00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.369 --rc genhtml_branch_coverage=1 00:16:39.369 --rc genhtml_function_coverage=1 00:16:39.369 --rc genhtml_legend=1 00:16:39.369 --rc geninfo_all_blocks=1 00:16:39.369 --rc geninfo_unexecuted_blocks=1 00:16:39.369 00:16:39.369 ' 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.369 --rc genhtml_branch_coverage=1 00:16:39.369 --rc genhtml_function_coverage=1 00:16:39.369 --rc genhtml_legend=1 00:16:39.369 --rc geninfo_all_blocks=1 00:16:39.369 --rc geninfo_unexecuted_blocks=1 00:16:39.369 00:16:39.369 ' 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.369 --rc genhtml_branch_coverage=1 00:16:39.369 --rc genhtml_function_coverage=1 00:16:39.369 --rc genhtml_legend=1 00:16:39.369 --rc geninfo_all_blocks=1 00:16:39.369 --rc geninfo_unexecuted_blocks=1 00:16:39.369 00:16:39.369 ' 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.369 16:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.369 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.369 
16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.370 16:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:39.370 16:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:39.370 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:45.942 16:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.942 16:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:45.942 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:45.942 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:45.942 Found net devices under 0000:af:00.0: cvl_0_0 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:45.942 Found net devices under 0000:af:00.1: cvl_0_1 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.942 16:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:45.942 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.942 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.942 16:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.942 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:45.942 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:45.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:16:45.942 00:16:45.942 --- 10.0.0.2 ping statistics --- 00:16:45.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.942 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:16:45.942 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:16:45.943 00:16:45.943 --- 10.0.0.1 ping statistics --- 00:16:45.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.943 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:45.943 16:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=944219 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 944219 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 944219 ']' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:45.943 [2024-12-14 16:30:15.166517] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:45.943 [2024-12-14 16:30:15.166580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.943 [2024-12-14 16:30:15.243568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.943 [2024-12-14 16:30:15.267144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.943 [2024-12-14 16:30:15.267181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.943 [2024-12-14 16:30:15.267188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.943 [2024-12-14 16:30:15.267194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.943 [2024-12-14 16:30:15.267199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.943 [2024-12-14 16:30:15.268670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.943 [2024-12-14 16:30:15.268707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.943 [2024-12-14 16:30:15.268815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.943 [2024-12-14 16:30:15.268816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode530 00:16:45.943 [2024-12-14 16:30:15.573496] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:45.943 { 00:16:45.943 "nqn": "nqn.2016-06.io.spdk:cnode530", 00:16:45.943 "tgt_name": "foobar", 00:16:45.943 "method": "nvmf_create_subsystem", 00:16:45.943 "req_id": 1 00:16:45.943 } 00:16:45.943 Got JSON-RPC error 
response 00:16:45.943 response: 00:16:45.943 { 00:16:45.943 "code": -32603, 00:16:45.943 "message": "Unable to find target foobar" 00:16:45.943 }' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:45.943 { 00:16:45.943 "nqn": "nqn.2016-06.io.spdk:cnode530", 00:16:45.943 "tgt_name": "foobar", 00:16:45.943 "method": "nvmf_create_subsystem", 00:16:45.943 "req_id": 1 00:16:45.943 } 00:16:45.943 Got JSON-RPC error response 00:16:45.943 response: 00:16:45.943 { 00:16:45.943 "code": -32603, 00:16:45.943 "message": "Unable to find target foobar" 00:16:45.943 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28609 00:16:45.943 [2024-12-14 16:30:15.774162] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28609: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:45.943 { 00:16:45.943 "nqn": "nqn.2016-06.io.spdk:cnode28609", 00:16:45.943 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:45.943 "method": "nvmf_create_subsystem", 00:16:45.943 "req_id": 1 00:16:45.943 } 00:16:45.943 Got JSON-RPC error response 00:16:45.943 response: 00:16:45.943 { 00:16:45.943 "code": -32602, 00:16:45.943 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:45.943 }' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:45.943 { 00:16:45.943 "nqn": "nqn.2016-06.io.spdk:cnode28609", 00:16:45.943 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:45.943 "method": "nvmf_create_subsystem", 00:16:45.943 
"req_id": 1 00:16:45.943 } 00:16:45.943 Got JSON-RPC error response 00:16:45.943 response: 00:16:45.943 { 00:16:45.943 "code": -32602, 00:16:45.943 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:45.943 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11482 00:16:45.943 [2024-12-14 16:30:15.966801] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11482: invalid model number 'SPDK_Controller' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:45.943 { 00:16:45.943 "nqn": "nqn.2016-06.io.spdk:cnode11482", 00:16:45.943 "model_number": "SPDK_Controller\u001f", 00:16:45.943 "method": "nvmf_create_subsystem", 00:16:45.943 "req_id": 1 00:16:45.943 } 00:16:45.943 Got JSON-RPC error response 00:16:45.943 response: 00:16:45.943 { 00:16:45.943 "code": -32602, 00:16:45.943 "message": "Invalid MN SPDK_Controller\u001f" 00:16:45.943 }' 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:45.943 { 00:16:45.943 "nqn": "nqn.2016-06.io.spdk:cnode11482", 00:16:45.943 "model_number": "SPDK_Controller\u001f", 00:16:45.943 "method": "nvmf_create_subsystem", 00:16:45.943 "req_id": 1 00:16:45.943 } 00:16:45.943 Got JSON-RPC error response 00:16:45.943 response: 00:16:45.943 { 00:16:45.943 "code": -32602, 00:16:45.943 "message": "Invalid MN SPDK_Controller\u001f" 00:16:45.943 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:45.943 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.943 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.943 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:45.944 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:45.944 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:45.944 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.944 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:46.203 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:46.203 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:46.203 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.203 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';-WVsRM"cE6qTw4H=!,a' 00:16:46.203 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';-WVsRM"cE6qTw4H=!,a' nqn.2016-06.io.spdk:cnode23963 00:16:46.463 [2024-12-14 16:30:16.324028] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23963: invalid serial number ';-WVsRM"cE6qTw4H=!,a' 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:46.463 { 00:16:46.463 "nqn": "nqn.2016-06.io.spdk:cnode23963", 00:16:46.463 "serial_number": ";-W\u007fVsRM\"cE6qTw4H=!,a", 00:16:46.463 "method": "nvmf_create_subsystem", 00:16:46.463 "req_id": 1 00:16:46.463 } 00:16:46.463 Got JSON-RPC error response 00:16:46.463 response: 00:16:46.463 { 00:16:46.463 "code": -32602, 00:16:46.463 "message": "Invalid SN ;-W\u007fVsRM\"cE6qTw4H=!,a" 00:16:46.463 }' 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:46.463 { 00:16:46.463 "nqn": "nqn.2016-06.io.spdk:cnode23963", 00:16:46.463 "serial_number": ";-W\u007fVsRM\"cE6qTw4H=!,a", 00:16:46.463 "method": "nvmf_create_subsystem", 00:16:46.463 "req_id": 1 00:16:46.463 } 00:16:46.463 Got JSON-RPC error response 00:16:46.463 response: 00:16:46.463 { 00:16:46.463 "code": -32602, 00:16:46.463 "message": "Invalid SN ;-W\u007fVsRM\"cE6qTw4H=!,a" 00:16:46.463 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
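The long per-character trace above is `gen_random_s` building the serial one character at a time: loop `length` times, pick a code from the `chars` array (ASCII 32–127), convert with `printf %x` / `echo -e '\xNN'`, and append. A condensed sketch reconstructed from the trace (the real `target/invalid.sh` helper may differ in detail; this version restricts itself to printable ASCII 32–126):

```shell
#!/usr/bin/env bash
# Reconstructed sketch of gen_random_s from the trace: emit a random
# string of the requested length, one printable-ASCII character per
# loop iteration, using the same printf-%x / echo -e '\xNN' conversion.
gen_random_s() {
    local length=$1 ll code string=''
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 95 + 32 ))                 # 32..126 printable ASCII
        string+=$(echo -e "\\x$(printf '%x' "$code")")
    done
    echo "$string"
}

gen_random_s 21    # e.g. a 21-char serial like ';-WVsRM"cE6qTw4H=!,a'
```

Building the string in a variable and echoing it once is what lets the caller pass the result straight to `rpc.py nvmf_create_subsystem -s "$(gen_random_s 21)"` as the trace does.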
00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:46.463 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 
00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:46.464 
16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:46.464 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:46.464 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.464 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:46.465 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.465 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:46.724 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? == \- ]] 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '?>,/w=W0#k+o)BR&4cNcc,+R-}RU:8:js~^{ESlg.' 00:16:46.724 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '?>,/w=W0#k+o)BR&4cNcc,+R-}RU:8:js~^{ESlg.' nqn.2016-06.io.spdk:cnode17496 00:16:46.724 [2024-12-14 16:30:16.797552] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17496: invalid model number '?>,/w=W0#k+o)BR&4cNcc,+R-}RU:8:js~^{ESlg.' 
00:16:46.983 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:46.983 { 00:16:46.983 "nqn": "nqn.2016-06.io.spdk:cnode17496", 00:16:46.983 "model_number": "?>,/w=W0#k+o)BR&4cNcc,+R-}RU:8:js~^{ESlg.", 00:16:46.983 "method": "nvmf_create_subsystem", 00:16:46.983 "req_id": 1 00:16:46.983 } 00:16:46.983 Got JSON-RPC error response 00:16:46.983 response: 00:16:46.983 { 00:16:46.983 "code": -32602, 00:16:46.983 "message": "Invalid MN ?>,/w=W0#k+o)BR&4cNcc,+R-}RU:8:js~^{ESlg." 00:16:46.983 }' 00:16:46.983 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:46.983 { 00:16:46.983 "nqn": "nqn.2016-06.io.spdk:cnode17496", 00:16:46.983 "model_number": "?>,/w=W0#k+o)BR&4cNcc,+R-}RU:8:js~^{ESlg.", 00:16:46.983 "method": "nvmf_create_subsystem", 00:16:46.983 "req_id": 1 00:16:46.983 } 00:16:46.983 Got JSON-RPC error response 00:16:46.983 response: 00:16:46.983 { 00:16:46.983 "code": -32602, 00:16:46.983 "message": "Invalid MN ?>,/w=W0#k+o)BR&4cNcc,+R-}RU:8:js~^{ESlg." 
00:16:46.983 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:46.983 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:46.983 [2024-12-14 16:30:16.994260] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.983 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:47.242 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:47.242 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:47.242 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:47.242 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:47.242 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:47.501 [2024-12-14 16:30:17.403584] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:47.501 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:47.501 { 00:16:47.501 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:47.501 "listen_address": { 00:16:47.501 "trtype": "tcp", 00:16:47.501 "traddr": "", 00:16:47.501 "trsvcid": "4421" 00:16:47.501 }, 00:16:47.501 "method": "nvmf_subsystem_remove_listener", 00:16:47.501 "req_id": 1 00:16:47.501 } 00:16:47.501 Got JSON-RPC error response 00:16:47.501 response: 00:16:47.501 { 00:16:47.501 "code": -32602, 00:16:47.501 "message": "Invalid parameters" 00:16:47.501 }' 00:16:47.501 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 
00:16:47.501 { 00:16:47.501 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:47.501 "listen_address": { 00:16:47.501 "trtype": "tcp", 00:16:47.501 "traddr": "", 00:16:47.501 "trsvcid": "4421" 00:16:47.501 }, 00:16:47.501 "method": "nvmf_subsystem_remove_listener", 00:16:47.501 "req_id": 1 00:16:47.501 } 00:16:47.501 Got JSON-RPC error response 00:16:47.501 response: 00:16:47.501 { 00:16:47.501 "code": -32602, 00:16:47.501 "message": "Invalid parameters" 00:16:47.501 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:47.501 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31051 -i 0 00:16:47.760 [2024-12-14 16:30:17.616231] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31051: invalid cntlid range [0-65519] 00:16:47.760 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:47.760 { 00:16:47.760 "nqn": "nqn.2016-06.io.spdk:cnode31051", 00:16:47.760 "min_cntlid": 0, 00:16:47.760 "method": "nvmf_create_subsystem", 00:16:47.760 "req_id": 1 00:16:47.760 } 00:16:47.760 Got JSON-RPC error response 00:16:47.760 response: 00:16:47.760 { 00:16:47.760 "code": -32602, 00:16:47.760 "message": "Invalid cntlid range [0-65519]" 00:16:47.760 }' 00:16:47.760 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:47.760 { 00:16:47.760 "nqn": "nqn.2016-06.io.spdk:cnode31051", 00:16:47.760 "min_cntlid": 0, 00:16:47.760 "method": "nvmf_create_subsystem", 00:16:47.760 "req_id": 1 00:16:47.760 } 00:16:47.760 Got JSON-RPC error response 00:16:47.760 response: 00:16:47.760 { 00:16:47.760 "code": -32602, 00:16:47.760 "message": "Invalid cntlid range [0-65519]" 00:16:47.760 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:47.760 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7152 -i 65520 00:16:47.760 [2024-12-14 16:30:17.816913] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7152: invalid cntlid range [65520-65519] 00:16:48.019 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:48.019 { 00:16:48.019 "nqn": "nqn.2016-06.io.spdk:cnode7152", 00:16:48.019 "min_cntlid": 65520, 00:16:48.019 "method": "nvmf_create_subsystem", 00:16:48.019 "req_id": 1 00:16:48.019 } 00:16:48.019 Got JSON-RPC error response 00:16:48.019 response: 00:16:48.019 { 00:16:48.019 "code": -32602, 00:16:48.019 "message": "Invalid cntlid range [65520-65519]" 00:16:48.019 }' 00:16:48.019 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:48.019 { 00:16:48.019 "nqn": "nqn.2016-06.io.spdk:cnode7152", 00:16:48.019 "min_cntlid": 65520, 00:16:48.019 "method": "nvmf_create_subsystem", 00:16:48.019 "req_id": 1 00:16:48.019 } 00:16:48.019 Got JSON-RPC error response 00:16:48.019 response: 00:16:48.019 { 00:16:48.019 "code": -32602, 00:16:48.019 "message": "Invalid cntlid range [65520-65519]" 00:16:48.019 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:48.019 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32226 -I 0 00:16:48.019 [2024-12-14 16:30:18.025632] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32226: invalid cntlid range [1-0] 00:16:48.019 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:48.019 { 00:16:48.019 "nqn": "nqn.2016-06.io.spdk:cnode32226", 00:16:48.019 "max_cntlid": 0, 00:16:48.019 "method": "nvmf_create_subsystem", 00:16:48.019 "req_id": 1 00:16:48.019 } 00:16:48.019 Got JSON-RPC 
error response 00:16:48.019 response: 00:16:48.019 { 00:16:48.019 "code": -32602, 00:16:48.019 "message": "Invalid cntlid range [1-0]" 00:16:48.019 }' 00:16:48.019 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:48.019 { 00:16:48.019 "nqn": "nqn.2016-06.io.spdk:cnode32226", 00:16:48.019 "max_cntlid": 0, 00:16:48.019 "method": "nvmf_create_subsystem", 00:16:48.019 "req_id": 1 00:16:48.019 } 00:16:48.019 Got JSON-RPC error response 00:16:48.019 response: 00:16:48.019 { 00:16:48.019 "code": -32602, 00:16:48.019 "message": "Invalid cntlid range [1-0]" 00:16:48.019 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:48.019 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15228 -I 65520 00:16:48.278 [2024-12-14 16:30:18.226306] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15228: invalid cntlid range [1-65520] 00:16:48.278 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:48.278 { 00:16:48.278 "nqn": "nqn.2016-06.io.spdk:cnode15228", 00:16:48.278 "max_cntlid": 65520, 00:16:48.278 "method": "nvmf_create_subsystem", 00:16:48.278 "req_id": 1 00:16:48.278 } 00:16:48.278 Got JSON-RPC error response 00:16:48.278 response: 00:16:48.278 { 00:16:48.278 "code": -32602, 00:16:48.278 "message": "Invalid cntlid range [1-65520]" 00:16:48.278 }' 00:16:48.279 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:48.279 { 00:16:48.279 "nqn": "nqn.2016-06.io.spdk:cnode15228", 00:16:48.279 "max_cntlid": 65520, 00:16:48.279 "method": "nvmf_create_subsystem", 00:16:48.279 "req_id": 1 00:16:48.279 } 00:16:48.279 Got JSON-RPC error response 00:16:48.279 response: 00:16:48.279 { 00:16:48.279 "code": -32602, 00:16:48.279 "message": "Invalid cntlid range [1-65520]" 00:16:48.279 
} == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:48.279 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12821 -i 6 -I 5 00:16:48.537 [2024-12-14 16:30:18.414999] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12821: invalid cntlid range [6-5] 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:48.537 { 00:16:48.537 "nqn": "nqn.2016-06.io.spdk:cnode12821", 00:16:48.537 "min_cntlid": 6, 00:16:48.537 "max_cntlid": 5, 00:16:48.537 "method": "nvmf_create_subsystem", 00:16:48.537 "req_id": 1 00:16:48.537 } 00:16:48.537 Got JSON-RPC error response 00:16:48.537 response: 00:16:48.537 { 00:16:48.537 "code": -32602, 00:16:48.537 "message": "Invalid cntlid range [6-5]" 00:16:48.537 }' 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:48.537 { 00:16:48.537 "nqn": "nqn.2016-06.io.spdk:cnode12821", 00:16:48.537 "min_cntlid": 6, 00:16:48.537 "max_cntlid": 5, 00:16:48.537 "method": "nvmf_create_subsystem", 00:16:48.537 "req_id": 1 00:16:48.537 } 00:16:48.537 Got JSON-RPC error response 00:16:48.537 response: 00:16:48.537 { 00:16:48.537 "code": -32602, 00:16:48.537 "message": "Invalid cntlid range [6-5]" 00:16:48.537 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:48.537 { 00:16:48.537 "name": "foobar", 00:16:48.537 "method": "nvmf_delete_target", 00:16:48.537 "req_id": 1 00:16:48.537 } 00:16:48.537 Got JSON-RPC error response 00:16:48.537 response: 00:16:48.537 
{ 00:16:48.537 "code": -32602, 00:16:48.537 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:48.537 }' 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:48.537 { 00:16:48.537 "name": "foobar", 00:16:48.537 "method": "nvmf_delete_target", 00:16:48.537 "req_id": 1 00:16:48.537 } 00:16:48.537 Got JSON-RPC error response 00:16:48.537 response: 00:16:48.537 { 00:16:48.537 "code": -32602, 00:16:48.537 "message": "The specified target doesn't exist, cannot delete it." 00:16:48.537 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.537 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:48.537 rmmod nvme_tcp 00:16:48.537 rmmod nvme_fabrics 00:16:48.537 rmmod nvme_keyring 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@517 -- # '[' -n 944219 ']' 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 944219 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 944219 ']' 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 944219 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 944219 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 944219' 00:16:48.797 killing process with pid 944219 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 944219 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 944219 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # grep -v SPDK_NVMF 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.797 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.334 16:30:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:51.334 00:16:51.334 real 0m11.945s 00:16:51.334 user 0m18.423s 00:16:51.334 sys 0m5.411s 00:16:51.334 16:30:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.334 16:30:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:51.334 ************************************ 00:16:51.334 END TEST nvmf_invalid 00:16:51.334 ************************************ 00:16:51.334 16:30:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:51.334 16:30:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:51.334 16:30:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.334 16:30:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.334 ************************************ 00:16:51.334 START TEST nvmf_connect_stress 00:16:51.334 ************************************ 00:16:51.334 
16:30:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:51.334 * Looking for test storage... 00:16:51.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.334 16:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@368 -- # return 0 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:51.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.334 --rc genhtml_branch_coverage=1 00:16:51.334 --rc genhtml_function_coverage=1 00:16:51.334 --rc genhtml_legend=1 00:16:51.334 --rc geninfo_all_blocks=1 00:16:51.334 --rc geninfo_unexecuted_blocks=1 00:16:51.334 00:16:51.334 ' 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:51.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.334 --rc genhtml_branch_coverage=1 00:16:51.334 --rc genhtml_function_coverage=1 00:16:51.334 --rc genhtml_legend=1 00:16:51.334 --rc geninfo_all_blocks=1 00:16:51.334 --rc geninfo_unexecuted_blocks=1 00:16:51.334 00:16:51.334 ' 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:51.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.334 --rc genhtml_branch_coverage=1 00:16:51.334 --rc genhtml_function_coverage=1 00:16:51.334 --rc genhtml_legend=1 00:16:51.334 --rc geninfo_all_blocks=1 00:16:51.334 --rc geninfo_unexecuted_blocks=1 00:16:51.334 00:16:51.334 ' 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:51.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.334 --rc genhtml_branch_coverage=1 00:16:51.334 --rc genhtml_function_coverage=1 00:16:51.334 --rc genhtml_legend=1 00:16:51.334 --rc geninfo_all_blocks=1 00:16:51.334 --rc geninfo_unexecuted_blocks=1 00:16:51.334 00:16:51.334 ' 00:16:51.334 16:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.334 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:51.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
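The log above captures a benign shell error from `nvmf/common.sh` line 33 (`[: : integer expression expected`): an empty string is tested with the arithmetic operator `-eq`. A minimal reproduction and a guarded alternative, with `FLAG` as a hypothetical stand-in for the unset variable:

```shell
#!/usr/bin/env bash
# Reproduce the class of error seen in the log: testing an empty string
# with an arithmetic operator. FLAG is a hypothetical stand-in for the
# empty variable in nvmf/common.sh.
FLAG=""

# This form writes "integer expression expected" to stderr and evaluates
# false (stderr suppressed here so the sketch runs cleanly):
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "flag set"
fi

# Guarded form: default empty/unset values to 0 before the numeric test.
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag not set"
fi
```

The `${FLAG:-0}` expansion is the usual fix when a flag may legitimately be empty; the test harness tolerates the error because the `[` command merely returns false.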
00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:51.335 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.906 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.906 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:57.906 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:57.906 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:57.906 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:57.906 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.907 16:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:57.907 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:57.907 16:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:57.907 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.907 16:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:57.907 Found net devices under 0000:af:00.0: cvl_0_0 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:57.907 Found net devices under 0000:af:00.1: cvl_0_1 
00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:57.907 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:57.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:57.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:16:57.907 00:16:57.907 --- 10.0.0.2 ping statistics --- 00:16:57.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.907 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:16:57.907 00:16:57.907 --- 10.0.0.1 ping statistics --- 00:16:57.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.907 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:57.907 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:57.908 16:30:27 
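The `nvmf_tcp_init` section above moves the target-side interface into a network namespace, assigns the 10.0.0.x test addresses, opens port 4420 in iptables, and verifies reachability with `ping`. A dry-run sketch of that sequence, mirroring the interface names from the log (`cvl_0_0` = target side, `cvl_0_1` = initiator side); `run` only prints the commands here, since the real ones need root and physical NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP test-net setup performed by nvmf_tcp_init in
# the log. "run" echoes instead of executing so the sketch works without
# root; drop the function to run the commands for real.
run() { echo "+ $*"; }

NETNS=cvl_0_0_ns_spdk
run ip netns add "$NETNS"
run ip link set cvl_0_0 netns "$NETNS"                       # target NIC into namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
run ip link set cvl_0_1 up
run ip netns exec "$NETNS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                       # reachability check
```

Putting the target NIC in its own namespace lets the initiator and target share one machine while still exercising a real TCP path between two interfaces.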
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=948323 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 948323 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 948323 ']' 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 [2024-12-14 16:30:27.140049] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:16:57.908 [2024-12-14 16:30:27.140102] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.908 [2024-12-14 16:30:27.217723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.908 [2024-12-14 16:30:27.240535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.908 [2024-12-14 16:30:27.240579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.908 [2024-12-14 16:30:27.240586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.908 [2024-12-14 16:30:27.240592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.908 [2024-12-14 16:30:27.240597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.908 [2024-12-14 16:30:27.241803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.908 [2024-12-14 16:30:27.241916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.908 [2024-12-14 16:30:27.241918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 [2024-12-14 16:30:27.373584] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 [2024-12-14 16:30:27.393834] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 NULL1 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=948511 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
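The `rpc_cmd` calls above configure the running target over `/var/tmp/spdk.sock`. Collected as a batch file for illustration (the RPC names and arguments are the ones shown in the log; writing them to a file rather than issuing them live is just a framing choice here, since executing them needs a running `nvmf_tgt`):

```shell
#!/usr/bin/env bash
# The four RPCs the test issues, in order: create the TCP transport,
# create the subsystem, add a listener on the namespaced target address,
# and back the subsystem with a null bdev (1000 MiB, 512-byte blocks).
batch=$(mktemp)
cat > "$batch" <<'EOF'
nvmf_create_transport -t tcp -o -u 8192
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
bdev_null_create NULL1 1000 512
EOF
wc -l < "$batch"
```

The null bdev gives the stress test an I/O target without touching real storage, which keeps the focus on connect/disconnect behavior.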
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:16:57.908 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.909 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
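The repeated `for i in $(seq 1 20)` / `cat` iterations above build up `rpc.txt` one entry per pass, and the `kill -0 $PERF_PID` lines that follow poll whether the `connect_stress` process is still alive. A minimal sketch of both patterns (the appended RPC text is illustrative, and `sleep` stands in for `connect_stress`):

```shell
#!/usr/bin/env bash
# Pattern 1: build a batch RPC file by appending one command per loop
# iteration, as the seq-1-20 loop in the log does.
rpcs=$(mktemp)
for i in $(seq 1 20); do
  echo "nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1" >> "$rpcs"
done

# Pattern 2: liveness polling with "kill -0", which sends no signal and
# merely checks that the PID exists and is signalable.
sleep 5 &          # stand-in for the connect_stress process
PERF_PID=$!
if kill -0 "$PERF_PID" 2>/dev/null; then
  echo "stress process alive"
fi
kill "$PERF_PID" 2>/dev/null
```

`kill -0` is the idiomatic way to check a child without reaping it, which is why the log repeats it between RPC batches: the test keeps hammering the target with RPCs only while the stressor is still running.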
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.909 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.167 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.167 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:16:58.167 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.167 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.167 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.426 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.426 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:16:58.426 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.426 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.426 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.993 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.993 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:16:58.993 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.993 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.993 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.251 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.251 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:16:59.251 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.251 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.251 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.509 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.509 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:16:59.509 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.509 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.510 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.768 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.768 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:16:59.768 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.768 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.768 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.026 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.026 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:00.026 16:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.026 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.026 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.594 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.594 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:00.594 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.594 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.594 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.853 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.853 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:00.853 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.853 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.853 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.111 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.111 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:01.112 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.112 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.112 16:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.370 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.370 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:01.370 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.370 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.370 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.938 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.938 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:01.938 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.938 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.938 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.196 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.196 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:02.196 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.196 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.196 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.455 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.455 16:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:02.455 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.455 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.455 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.713 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.713 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:02.713 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.713 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.713 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.972 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.972 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:02.972 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.972 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.972 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.540 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.540 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:03.540 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.540 16:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.540 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.798 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.798 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:03.798 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.798 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.798 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.057 16:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.057 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:04.057 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.057 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.057 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.315 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.315 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:04.315 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.315 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.316 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.574 16:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.574 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:04.574 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.574 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.574 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.141 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.141 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:05.141 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.141 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.141 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.398 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.398 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:05.398 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.398 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.398 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.655 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.655 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:05.655 
16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.655 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.655 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.914 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.914 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:05.914 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.914 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.914 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.482 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.482 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:06.482 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.482 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.482 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.740 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.740 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:06.740 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.740 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.740 
16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:06.999 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.999 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:06.999 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.999 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.999 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.258 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.258 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:07.258 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.258 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.258 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:07.517 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 948511 00:17:07.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (948511) - No such process 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 948511 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:07.517 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:07.517 rmmod nvme_tcp 00:17:07.776 rmmod nvme_fabrics 00:17:07.776 rmmod nvme_keyring 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 948323 ']' 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 948323 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 948323 ']' 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 948323 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
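The `killprocess` trace above probes the target with `kill -0`, confirms the PID still belongs to the expected process via `ps -o comm=`, and only then signals it. A minimal standalone sketch of that pattern (the function name `safe_kill` and its arguments are illustrative, not SPDK's exact helper):

```shell
#!/usr/bin/env bash
# Sketch of the "kill only if it is still the process we started" pattern
# seen in autotest_common.sh's killprocess. Names here are illustrative.

safe_kill() {
    local pid=$1 expected=$2

    # kill -0 probes for existence without delivering a signal.
    if ! kill -0 "$pid" 2>/dev/null; then
        return 0   # already gone, nothing to do
    fi

    # Guard against PID recycling: make sure the command name matches.
    local comm
    comm=$(ps --no-headers -o comm= "$pid") || return 0
    if [ "$comm" != "$expected" ]; then
        return 1   # PID was reused by an unrelated process
    fi

    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap it if it is our child
    return 0
}
```

Usage mirrors the trace: start a workload in the background, then `safe_kill "$pid" reactor_1`-style calls until the process is gone, at which point `kill -0` fails just as it does at `connect_stress.sh` line 34 above.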
00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948323 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948323' 00:17:07.776 killing process with pid 948323 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 948323 00:17:07.776 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 948323 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.035 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.941 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:09.941 00:17:09.941 real 0m18.952s 00:17:09.941 user 0m39.258s 00:17:09.941 sys 0m8.651s 00:17:09.941 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.941 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.941 ************************************ 00:17:09.941 END TEST nvmf_connect_stress 00:17:09.941 ************************************ 00:17:09.941 16:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:09.941 16:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:09.941 16:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.941 16:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.941 ************************************ 00:17:09.941 START TEST nvmf_fused_ordering 00:17:09.941 ************************************ 00:17:09.941 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:10.277 * Looking for test storage... 
00:17:10.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:10.277 16:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.277 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.278 16:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:10.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.278 --rc genhtml_branch_coverage=1 00:17:10.278 --rc genhtml_function_coverage=1 00:17:10.278 --rc genhtml_legend=1 00:17:10.278 --rc geninfo_all_blocks=1 00:17:10.278 --rc geninfo_unexecuted_blocks=1 00:17:10.278 00:17:10.278 ' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:10.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.278 --rc genhtml_branch_coverage=1 00:17:10.278 --rc genhtml_function_coverage=1 00:17:10.278 --rc genhtml_legend=1 00:17:10.278 --rc geninfo_all_blocks=1 00:17:10.278 --rc geninfo_unexecuted_blocks=1 00:17:10.278 00:17:10.278 ' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:10.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.278 --rc genhtml_branch_coverage=1 00:17:10.278 --rc genhtml_function_coverage=1 00:17:10.278 --rc genhtml_legend=1 00:17:10.278 --rc geninfo_all_blocks=1 00:17:10.278 --rc geninfo_unexecuted_blocks=1 00:17:10.278 00:17:10.278 ' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:10.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.278 --rc genhtml_branch_coverage=1 00:17:10.278 --rc genhtml_function_coverage=1 00:17:10.278 --rc genhtml_legend=1 00:17:10.278 --rc geninfo_all_blocks=1 00:17:10.278 --rc geninfo_unexecuted_blocks=1 00:17:10.278 00:17:10.278 ' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
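The `lt 1.15 2` trace above walks scripts/common.sh's `cmp_versions`: both version strings are split on `.` and `-` into arrays, then compared numerically component by component. A compact re-implementation of that idea, assuming plain numeric components (the helper name `version_lt` is illustrative):

```shell
# Dotted-version "less than" in the spirit of scripts/common.sh's
# cmp_versions: split on . and -, compare numerically field by field,
# treating missing fields as 0 (so 1.15 < 2, and 2 == 2.0).
version_lt() {
    local IFS='.-'
    # Intentional unquoted expansion: IFS drives the split into arrays.
    # shellcheck disable=SC2206
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1   # equal is not "less than"
}
```

With this sketch, `version_lt 1.15 2` succeeds (matching the lcov check above) while `version_lt 1.15 1.15` fails, which is exactly the `return 0` path the trace takes at `scripts/common.sh@368`.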
00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.278 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:16.895 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.896 16:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:16.896 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:16.896 16:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:16.896 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.896 16:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:16.896 Found net devices under 0000:af:00.0: cvl_0_0 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:16.896 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:16.896 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:16.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:16.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:17:16.896 00:17:16.896 --- 10.0.0.2 ping statistics --- 00:17:16.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.896 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:17:16.896 00:17:16.896 --- 10.0.0.1 ping statistics --- 00:17:16.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.896 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:16.896 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:16.897 16:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=953607 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 953607 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 953607 ']' 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 [2024-12-14 16:30:46.182022] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:16.897 [2024-12-14 16:30:46.182071] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.897 [2024-12-14 16:30:46.260145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.897 [2024-12-14 16:30:46.281892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.897 [2024-12-14 16:30:46.281930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.897 [2024-12-14 16:30:46.281936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.897 [2024-12-14 16:30:46.281942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.897 [2024-12-14 16:30:46.281947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:16.897 [2024-12-14 16:30:46.282438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 [2024-12-14 16:30:46.412901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 [2024-12-14 16:30:46.437080] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 NULL1 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.897 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:16.897 [2024-12-14 16:30:46.497466] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:16.897 [2024-12-14 16:30:46.497498] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953631 ] 00:17:16.897 Attached to nqn.2016-06.io.spdk:cnode1 00:17:16.897 Namespace ID: 1 size: 1GB 00:17:16.897 fused_ordering(0) 00:17:16.897 fused_ordering(1) 00:17:16.897 fused_ordering(2) 00:17:16.897 fused_ordering(3) 00:17:16.897 fused_ordering(4) 00:17:16.897 fused_ordering(5) 00:17:16.897 fused_ordering(6) 00:17:16.897 fused_ordering(7) 00:17:16.897 fused_ordering(8) 00:17:16.897 fused_ordering(9) 00:17:16.897 fused_ordering(10) 00:17:16.897 fused_ordering(11) 00:17:16.897 fused_ordering(12) 00:17:16.897 fused_ordering(13) 00:17:16.897 fused_ordering(14) 00:17:16.897 fused_ordering(15) 00:17:16.897 fused_ordering(16) 00:17:16.897 fused_ordering(17) 00:17:16.897 fused_ordering(18) 00:17:16.897 fused_ordering(19) 00:17:16.897 fused_ordering(20) 00:17:16.897 fused_ordering(21) 00:17:16.897 fused_ordering(22) 00:17:16.897 fused_ordering(23) 00:17:16.897 fused_ordering(24) 00:17:16.897 fused_ordering(25) 00:17:16.897 fused_ordering(26) 00:17:16.897 fused_ordering(27) 00:17:16.897 
fused_ordering(28) 00:17:16.897 … 00:17:18.243 fused_ordering(1023)
[~1000 repetitive per-iteration fused_ordering progress entries (28–1023) elided: 28–204 logged at 00:17:16.897–.898, 205–409 at 00:17:17.156–.157, 410–614 at 00:17:17.416, 615–819 at 00:17:17.675–.676, 820–1023 at 00:17:18.243]
00:17:18.243 [2024-12-14 16:30:48.209222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f9a10 is same with the state(6) to be set
00:17:18.243 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:18.243 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:18.243 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:18.243 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:18.243 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:18.244 rmmod nvme_tcp
rmmod nvme_fabrics
00:17:18.244 rmmod nvme_keyring
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 953607 ']'
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 953607
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 953607 ']'
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 953607
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:18.244 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953607
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953607'
killing process with pid 953607
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 953607
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 953607
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:18.503 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:21.039
00:17:21.039 real 0m10.550s
00:17:21.039 user 0m4.890s
00:17:21.039 sys 0m5.741s
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:21.039 ************************************
00:17:21.039 END TEST nvmf_fused_ordering
00:17:21.039 ************************************
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:21.039 ************************************
00:17:21.039 START TEST nvmf_ns_masking
00:17:21.039 ************************************
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:17:21.039 * Looking for test storage...
00:17:21.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
16:30:50
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.039 --rc genhtml_branch_coverage=1 00:17:21.039 --rc genhtml_function_coverage=1 00:17:21.039 --rc genhtml_legend=1 00:17:21.039 --rc geninfo_all_blocks=1 00:17:21.039 --rc geninfo_unexecuted_blocks=1 00:17:21.039 00:17:21.039 ' 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.039 --rc genhtml_branch_coverage=1 00:17:21.039 --rc genhtml_function_coverage=1 00:17:21.039 --rc genhtml_legend=1 00:17:21.039 --rc geninfo_all_blocks=1 00:17:21.039 --rc geninfo_unexecuted_blocks=1 00:17:21.039 00:17:21.039 ' 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:21.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.039 --rc genhtml_branch_coverage=1 00:17:21.039 --rc genhtml_function_coverage=1 00:17:21.039 --rc genhtml_legend=1 00:17:21.039 --rc geninfo_all_blocks=1 00:17:21.039 --rc geninfo_unexecuted_blocks=1 00:17:21.039 00:17:21.039 ' 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:17:21.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.039 --rc genhtml_branch_coverage=1 00:17:21.039 --rc genhtml_function_coverage=1 00:17:21.039 --rc genhtml_legend=1 00:17:21.039 --rc geninfo_all_blocks=1 00:17:21.039 --rc geninfo_unexecuted_blocks=1 00:17:21.039 00:17:21.039 ' 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.039 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=80344370-86c0-4bd9-af83-dbd7df046761 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=70956bcc-8a7a-473d-835b-df6f70ca8d0c 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3fa2d7f1-09b2-4c1e-9441-a38b6bc42221 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:21.040 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.614 16:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.614 16:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:27.614 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:27.614 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.614 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:27.615 Found net devices under 0000:af:00.0: cvl_0_0 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:27.615 Found net devices under 0000:af:00.1: cvl_0_1 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:27.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:17:27.615 00:17:27.615 --- 10.0.0.2 ping statistics --- 00:17:27.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.615 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:17:27.615 00:17:27.615 --- 10.0.0.1 ping statistics --- 00:17:27.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.615 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=957521 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 957521 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 957521 ']' 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.615 16:30:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:27.615 [2024-12-14 16:30:56.901853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:27.615 [2024-12-14 16:30:56.901899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.615 [2024-12-14 16:30:56.977299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.615 [2024-12-14 16:30:56.998551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.615 [2024-12-14 16:30:56.998591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:27.615 [2024-12-14 16:30:56.998598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.615 [2024-12-14 16:30:56.998604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.615 [2024-12-14 16:30:56.998609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.615 [2024-12-14 16:30:56.999131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:27.615 [2024-12-14 16:30:57.298347] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:27.615 Malloc1 00:17:27.615 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:27.874 Malloc2 00:17:27.874 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:27.874 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:28.132 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.390 [2024-12-14 16:30:58.320773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.390 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:28.390 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3fa2d7f1-09b2-4c1e-9441-a38b6bc42221 -a 10.0.0.2 -s 4420 -i 4 00:17:28.649 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:28.649 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:28.649 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.649 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:28.649 16:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:30.549 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:30.549 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:30.549 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.549 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:30.549 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.549 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:30.549 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:30.549 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:30.807 [ 0]:0x1 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:30.807 
16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10504b3ed55f406aa0cdc49b9341fb23 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10504b3ed55f406aa0cdc49b9341fb23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:30.807 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:31.065 [ 0]:0x1 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10504b3ed55f406aa0cdc49b9341fb23 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10504b3ed55f406aa0cdc49b9341fb23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.065 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:31.065 [ 1]:0x2 00:17:31.065 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:31.065 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.065 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ed0c03b24f74165ad78e98e0fb3f704 00:17:31.066 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ed0c03b24f74165ad78e98e0fb3f704 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.066 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:31.066 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.066 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.324 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:31.582 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:31.582 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3fa2d7f1-09b2-4c1e-9441-a38b6bc42221 -a 10.0.0.2 -s 4420 -i 4 00:17:31.840 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:31.840 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:31.840 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:31.840 16:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:31.840 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:31.840 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:33.740 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:33.998 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:33.998 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:33.998 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:33.998 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:33.999 [ 0]:0x2 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ed0c03b24f74165ad78e98e0fb3f704 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ed0c03b24f74165ad78e98e0fb3f704 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:33.999 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:34.257 [ 0]:0x1 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10504b3ed55f406aa0cdc49b9341fb23 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10504b3ed55f406aa0cdc49b9341fb23 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.257 [ 1]:0x2 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ed0c03b24f74165ad78e98e0fb3f704 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ed0c03b24f74165ad78e98e0fb3f704 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.257 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:34.548 [ 0]:0x2 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ed0c03b24f74165ad78e98e0fb3f704 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ed0c03b24f74165ad78e98e0fb3f704 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:34.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.548 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:34.806 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:34.806 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3fa2d7f1-09b2-4c1e-9441-a38b6bc42221 -a 10.0.0.2 -s 4420 -i 4 00:17:34.806 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:34.806 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:34.806 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.806 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:34.806 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:34.806 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:37.335 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:37.335 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:37.335 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:37.335 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:37.335 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:37.335 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:37.335 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:37.335 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:37.335 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:37.335 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:37.335 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:37.336 [ 0]:0x1 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:37.336 16:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10504b3ed55f406aa0cdc49b9341fb23 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10504b3ed55f406aa0cdc49b9341fb23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:37.336 [ 1]:0x2 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ed0c03b24f74165ad78e98e0fb3f704 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ed0c03b24f74165ad78e98e0fb3f704 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:37.336 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:37.594 
16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:37.594 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:37.595 [ 0]:0x2 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ed0c03b24f74165ad78e98e0fb3f704 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ed0c03b24f74165ad78e98e0fb3f704 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.595 16:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:37.595 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:37.853 [2024-12-14 16:31:07.731921] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:37.853 request: 00:17:37.853 { 00:17:37.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.853 "nsid": 2, 00:17:37.853 "host": "nqn.2016-06.io.spdk:host1", 00:17:37.853 "method": "nvmf_ns_remove_host", 00:17:37.853 "req_id": 1 00:17:37.853 } 00:17:37.853 Got JSON-RPC error response 00:17:37.853 response: 00:17:37.853 { 00:17:37.853 "code": -32602, 00:17:37.853 "message": "Invalid parameters" 00:17:37.853 } 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:37.853 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:37.854 16:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:37.854 [ 0]:0x2 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ed0c03b24f74165ad78e98e0fb3f704 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ed0c03b24f74165ad78e98e0fb3f704 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:37.854 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:38.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=959482 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 959482 
/var/tmp/host.sock 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 959482 ']' 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:38.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.112 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.112 [2024-12-14 16:31:08.103714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:38.112 [2024-12-14 16:31:08.103757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959482 ] 00:17:38.112 [2024-12-14 16:31:08.178857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.371 [2024-12-14 16:31:08.201066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.371 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.371 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:38.371 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:38.629 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:38.887 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 80344370-86c0-4bd9-af83-dbd7df046761 00:17:38.887 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:38.887 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8034437086C04BD9AF83DBD7DF046761 -i 00:17:39.145 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 70956bcc-8a7a-473d-835b-df6f70ca8d0c 00:17:39.145 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:39.145 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 70956BCC8A7A473D835BDF6F70CA8D0C -i 00:17:39.145 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:39.403 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:39.661 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:39.661 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:39.919 nvme0n1 00:17:39.919 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:39.919 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:40.178 nvme1n2 00:17:40.178 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:40.178 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:40.178 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:40.178 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:40.178 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:40.436 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:40.436 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:40.436 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:40.436 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:40.694 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 80344370-86c0-4bd9-af83-dbd7df046761 == \8\0\3\4\4\3\7\0\-\8\6\c\0\-\4\b\d\9\-\a\f\8\3\-\d\b\d\7\d\f\0\4\6\7\6\1 ]] 00:17:40.694 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:40.694 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:40.694 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:40.953 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 70956bcc-8a7a-473d-835b-df6f70ca8d0c == \7\0\9\5\6\b\c\c\-\8\a\7\a\-\4\7\3\d\-\8\3\5\b\-\d\f\6\f\7\0\c\a\8\d\0\c ]] 00:17:40.953 16:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 80344370-86c0-4bd9-af83-dbd7df046761 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8034437086C04BD9AF83DBD7DF046761 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8034437086C04BD9AF83DBD7DF046761 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:41.211 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8034437086C04BD9AF83DBD7DF046761 00:17:41.469 [2024-12-14 16:31:11.434089] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:41.469 [2024-12-14 16:31:11.434124] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:41.469 [2024-12-14 16:31:11.434132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.469 request: 00:17:41.469 { 00:17:41.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.469 "namespace": { 00:17:41.469 "bdev_name": "invalid", 00:17:41.469 "nsid": 1, 00:17:41.469 "nguid": "8034437086C04BD9AF83DBD7DF046761", 00:17:41.469 "no_auto_visible": false, 00:17:41.469 "hide_metadata": false 00:17:41.469 }, 00:17:41.469 "method": "nvmf_subsystem_add_ns", 00:17:41.469 "req_id": 1 00:17:41.469 } 00:17:41.469 Got JSON-RPC error response 00:17:41.469 response: 00:17:41.469 { 00:17:41.469 "code": -32602, 00:17:41.469 "message": "Invalid parameters" 00:17:41.469 } 00:17:41.469 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:41.469 16:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.469 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.469 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.469 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 80344370-86c0-4bd9-af83-dbd7df046761 00:17:41.469 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:41.469 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8034437086C04BD9AF83DBD7DF046761 -i 00:17:41.728 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:43.628 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:43.628 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:43.628 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 959482 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 959482 ']' 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 959482 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:43.887 16:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 959482 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 959482' 00:17:43.887 killing process with pid 959482 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 959482 00:17:43.887 16:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 959482 00:17:44.145 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:17:44.403 rmmod nvme_tcp 00:17:44.403 rmmod nvme_fabrics 00:17:44.403 rmmod nvme_keyring 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 957521 ']' 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 957521 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 957521 ']' 00:17:44.403 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 957521 00:17:44.404 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:44.404 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.404 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957521 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957521' 00:17:44.662 killing process with pid 957521 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 957521 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 957521 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.662 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:47.196 00:17:47.196 real 0m26.136s 00:17:47.196 user 0m31.208s 00:17:47.196 sys 0m6.981s 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:47.196 ************************************ 00:17:47.196 END TEST nvmf_ns_masking 00:17:47.196 ************************************ 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:47.196 
16:31:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:47.196 ************************************ 00:17:47.196 START TEST nvmf_nvme_cli 00:17:47.196 ************************************ 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:47.196 * Looking for test storage... 00:17:47.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.196 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.197 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.197 
16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.197 --rc genhtml_branch_coverage=1 00:17:47.197 --rc genhtml_function_coverage=1 00:17:47.197 --rc genhtml_legend=1 00:17:47.197 --rc geninfo_all_blocks=1 00:17:47.197 --rc geninfo_unexecuted_blocks=1 00:17:47.197 
00:17:47.197 ' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.197 --rc genhtml_branch_coverage=1 00:17:47.197 --rc genhtml_function_coverage=1 00:17:47.197 --rc genhtml_legend=1 00:17:47.197 --rc geninfo_all_blocks=1 00:17:47.197 --rc geninfo_unexecuted_blocks=1 00:17:47.197 00:17:47.197 ' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.197 --rc genhtml_branch_coverage=1 00:17:47.197 --rc genhtml_function_coverage=1 00:17:47.197 --rc genhtml_legend=1 00:17:47.197 --rc geninfo_all_blocks=1 00:17:47.197 --rc geninfo_unexecuted_blocks=1 00:17:47.197 00:17:47.197 ' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.197 --rc genhtml_branch_coverage=1 00:17:47.197 --rc genhtml_function_coverage=1 00:17:47.197 --rc genhtml_legend=1 00:17:47.197 --rc geninfo_all_blocks=1 00:17:47.197 --rc geninfo_unexecuted_blocks=1 00:17:47.197 00:17:47.197 ' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.197 16:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:47.197 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.198 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.198 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:47.198 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:47.198 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:47.198 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:47.198 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:53.767 16:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.767 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:53.768 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:53.768 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.768 16:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:53.768 Found net devices under 0000:af:00.0: cvl_0_0 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:53.768 Found net devices under 0000:af:00.1: cvl_0_1 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.768 16:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:53.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:17:53.768 00:17:53.768 --- 10.0.0.2 ping statistics --- 00:17:53.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.768 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:17:53.768 00:17:53.768 --- 10.0.0.1 ping statistics --- 00:17:53.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.768 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:53.768 16:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=963897 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 963897 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 963897 ']' 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.768 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.768 [2024-12-14 16:31:23.047471] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
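`waitforlisten` above blocks until the just-launched nvmf_tgt answers on /var/tmp/spdk.sock, retrying with `max_retries=100`. The underlying retry pattern can be sketched generically — this is an illustrative poller, not the actual common/autotest_common.sh code, and `RETRY_DELAY` is an assumed knob:

```shell
# wait_for: run a check command repeatedly until it succeeds or the
# retry budget runs out. Returns 0 on success, 1 if it gave up.
wait_for() {
  local max_retries=$1; shift
  local i=0
  while (( i++ < max_retries )); do
    "$@" && return 0              # check passed
    sleep "${RETRY_DELAY:-1}"     # back off, then try again
  done
  return 1                        # gave up after max_retries attempts
}
```

In the log the check is effectively "does the RPC socket accept a connection"; with this sketch any command works, e.g. `wait_for 100 test -S /var/tmp/spdk.sock`.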
00:17:53.768 [2024-12-14 16:31:23.047516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.768 [2024-12-14 16:31:23.129230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.768 [2024-12-14 16:31:23.153017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.768 [2024-12-14 16:31:23.153060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.768 [2024-12-14 16:31:23.153067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.768 [2024-12-14 16:31:23.153072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.769 [2024-12-14 16:31:23.153077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
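nvmf_tgt was started with `-m 0xF` and the EAL notice reports 4 cores available: each set bit in the core mask becomes one reactor. The bit-to-core mapping can be sketched as follows (hypothetical helper, for illustration only):

```shell
# mask_to_cores: list the core indices of the set bits in a hex
# core mask, e.g. 0xF covers cores 0 through 3.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -ne 0 ]; then
      out="$out$core "            # this core is in the mask
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  printf '%s\n' "${out% }"
}
```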
00:17:53.769 [2024-12-14 16:31:23.154566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.769 [2024-12-14 16:31:23.154597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.769 [2024-12-14 16:31:23.154705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.769 [2024-12-14 16:31:23.154706] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 [2024-12-14 16:31:23.286376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 Malloc0 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 Malloc1 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 [2024-12-14 16:31:23.382444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:53.769 00:17:53.769 Discovery Log Number of Records 2, Generation counter 2 00:17:53.769 =====Discovery Log Entry 0====== 00:17:53.769 trtype: tcp 00:17:53.769 adrfam: ipv4 00:17:53.769 subtype: current discovery subsystem 00:17:53.769 treq: not required 00:17:53.769 portid: 0 00:17:53.769 trsvcid: 4420 
00:17:53.769 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:53.769 traddr: 10.0.0.2 00:17:53.769 eflags: explicit discovery connections, duplicate discovery information 00:17:53.769 sectype: none 00:17:53.769 =====Discovery Log Entry 1====== 00:17:53.769 trtype: tcp 00:17:53.769 adrfam: ipv4 00:17:53.769 subtype: nvme subsystem 00:17:53.769 treq: not required 00:17:53.769 portid: 0 00:17:53.769 trsvcid: 4420 00:17:53.769 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:53.769 traddr: 10.0.0.2 00:17:53.769 eflags: none 00:17:53.769 sectype: none 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:53.769 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.705 16:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:54.705 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:54.705 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.705 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:54.705 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:54.705 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:57.237 
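The `get_nvme_devs` calls in this section walk `nvme list` output line by line, keeping only first columns that match `/dev/nvme*` and skipping the `Node` header and the dashed rule. That filtering step can be reproduced in isolation — a sketch of the parsing logic that reads from stdin instead of running `nvme list`:

```shell
# get_nvme_devs_from: emit the first column of each input line that
# names an NVMe block device, mirroring the
# [[ $dev == /dev/nvme* ]] filter seen in nvmf/common.sh.
get_nvme_devs_from() {
  local dev _rest
  while read -r dev _rest; do
    case $dev in
      /dev/nvme*) printf '%s\n' "$dev" ;;   # a device line; keep it
    esac                                    # header/rule lines fall through
  done
}
```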
16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:57.237 /dev/nvme0n2 ]] 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.237 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:57.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.238 rmmod nvme_tcp 00:17:57.238 rmmod nvme_fabrics 00:17:57.238 rmmod nvme_keyring 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 963897 ']' 
00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 963897 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 963897 ']' 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 963897 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.238 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 963897 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 963897' 00:17:57.238 killing process with pid 963897 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 963897 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 963897 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
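The `killprocess 963897` trace above checks that the pid is non-empty, that the process is still alive (`kill -0`), and that its command name is not `sudo` before killing and waiting on it. A sketch of that flow under the same checks (details reconstructed from the trace, not copied from the SPDK sources):

```shell
#!/usr/bin/env bash
# Kill a process by pid the way the killprocess trace does: verify it is
# alive, refuse to kill a sudo wrapper, then kill and reap it.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1           # process must exist
    local comm
    comm=$(ps --no-headers -o comm= "$pid")
    [ "$comm" = sudo ] && return 1                   # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                          # reap; ignore kill status
    return 0
}

sleep 60 &
killprocess_sketch "$!"
```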
00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.238 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:59.774 00:17:59.774 real 0m12.463s 00:17:59.774 user 0m17.966s 00:17:59.774 sys 0m5.090s 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.774 ************************************ 00:17:59.774 END TEST nvmf_nvme_cli 00:17:59.774 ************************************ 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.774 ************************************ 00:17:59.774 START TEST 
nvmf_vfio_user 00:17:59.774 ************************************ 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:59.774 * Looking for test storage... 00:17:59.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.774 16:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:59.774 16:31:29 
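The `lt 1.15 2` trace above shows `scripts/common.sh` splitting both versions on `.`, `-` and `:` (`IFS=.-:`), then comparing component by component, treating a missing component as 0. A compact re-implementation of that comparison (a sketch of the traced logic, not the exact `scripts/common.sh` source):

```shell
#!/usr/bin/env bash
# version_lt A B -- succeed iff version A sorts strictly before B,
# using the same split-on-.-: component comparison as the trace above.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```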
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:59.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.774 --rc genhtml_branch_coverage=1 00:17:59.774 --rc genhtml_function_coverage=1 00:17:59.774 --rc genhtml_legend=1 00:17:59.774 --rc geninfo_all_blocks=1 00:17:59.774 --rc geninfo_unexecuted_blocks=1 00:17:59.774 00:17:59.774 ' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:59.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.774 --rc genhtml_branch_coverage=1 00:17:59.774 --rc genhtml_function_coverage=1 00:17:59.774 --rc genhtml_legend=1 00:17:59.774 --rc geninfo_all_blocks=1 00:17:59.774 --rc geninfo_unexecuted_blocks=1 00:17:59.774 00:17:59.774 ' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:59.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.774 --rc genhtml_branch_coverage=1 00:17:59.774 --rc genhtml_function_coverage=1 00:17:59.774 --rc genhtml_legend=1 00:17:59.774 --rc geninfo_all_blocks=1 00:17:59.774 --rc geninfo_unexecuted_blocks=1 00:17:59.774 00:17:59.774 ' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:59.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.774 --rc genhtml_branch_coverage=1 00:17:59.774 --rc genhtml_function_coverage=1 00:17:59.774 --rc genhtml_legend=1 00:17:59.774 --rc geninfo_all_blocks=1 00:17:59.774 --rc geninfo_unexecuted_blocks=1 00:17:59.774 00:17:59.774 ' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.774 
16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.774 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:59.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:59.775 16:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=965144 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 965144' 00:17:59.775 Process pid: 965144 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 965144 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
965144 ']' 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:59.775 [2024-12-14 16:31:29.654134] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:59.775 [2024-12-14 16:31:29.654178] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.775 [2024-12-14 16:31:29.729603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.775 [2024-12-14 16:31:29.752131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.775 [2024-12-14 16:31:29.752170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.775 [2024-12-14 16:31:29.752177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.775 [2024-12-14 16:31:29.752183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.775 [2024-12-14 16:31:29.752187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:59.775 [2024-12-14 16:31:29.753494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.775 [2024-12-14 16:31:29.753531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.775 [2024-12-14 16:31:29.753652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.775 [2024-12-14 16:31:29.753653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:59.775 16:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:01.151 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:01.151 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:01.152 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:01.152 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:01.152 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:01.152 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:01.410 Malloc1 00:18:01.410 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:01.410 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:01.669 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:01.928 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:01.928 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:01.928 16:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:02.187 Malloc2 00:18:02.187 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:02.445 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:02.445 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:02.704 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:02.704 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:02.704 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
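The setup traced above repeats the same RPC sequence per device: create a socket directory, a 64 MiB malloc bdev, a subsystem, attach the namespace, and add a VFIOUSER listener. Collected into one sketch (RPC names, NQNs and paths mirror the log; `$rpc_py` and a running `nvmf_tgt` are assumed, so this is an outline of the test script's flow, not a drop-in replacement for nvmf_vfio_user.sh):

```shell
rpc_py=./scripts/rpc.py

$rpc_py nvmf_create_transport -t VFIOUSER            # once, before any device
for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    $rpc_py bdev_malloc_create 64 512 -b "Malloc$i"
    $rpc_py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc_py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $rpc_py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done
```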
$(seq 1 $NUM_DEVICES) 00:18:02.704 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:02.704 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:02.704 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:02.704 [2024-12-14 16:31:32.724214] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:02.704 [2024-12-14 16:31:32.724248] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965617 ] 00:18:02.704 [2024-12-14 16:31:32.761010] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:02.705 [2024-12-14 16:31:32.766334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:02.705 [2024-12-14 16:31:32.766353] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8750077000 00:18:02.705 [2024-12-14 16:31:32.767330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:02.705 [2024-12-14 16:31:32.768326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:02.705 [2024-12-14 16:31:32.769337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:02.705 [2024-12-14 16:31:32.770340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:02.705 [2024-12-14 16:31:32.771343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:02.705 [2024-12-14 16:31:32.772348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:02.705 [2024-12-14 16:31:32.773356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:02.705 [2024-12-14 16:31:32.774360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:02.705 [2024-12-14 16:31:32.775366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:02.705 [2024-12-14 16:31:32.775376] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f874ed81000 00:18:02.705 [2024-12-14 16:31:32.776291] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:02.705 [2024-12-14 16:31:32.789830] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:02.705 [2024-12-14 16:31:32.789859] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:02.965 [2024-12-14 16:31:32.792479] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:02.965 [2024-12-14 16:31:32.792518] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:02.965 [2024-12-14 16:31:32.792601] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:02.965 [2024-12-14 16:31:32.792618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:02.965 [2024-12-14 16:31:32.792624] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:02.965 [2024-12-14 16:31:32.793477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:02.965 [2024-12-14 16:31:32.793486] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:02.965 [2024-12-14 16:31:32.793492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:02.965 [2024-12-14 16:31:32.794479] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:02.965 [2024-12-14 16:31:32.794488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:02.965 [2024-12-14 16:31:32.794494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:02.965 [2024-12-14 16:31:32.795486] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:02.965 [2024-12-14 16:31:32.795494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:02.965 [2024-12-14 16:31:32.796490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:02.965 [2024-12-14 16:31:32.796497] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:02.965 [2024-12-14 16:31:32.796501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:02.965 [2024-12-14 16:31:32.796507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:02.965 [2024-12-14 16:31:32.796614] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:02.965 [2024-12-14 16:31:32.796618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:02.965 [2024-12-14 16:31:32.796623] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:02.965 [2024-12-14 16:31:32.797501] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:02.965 [2024-12-14 16:31:32.798506] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:02.965 [2024-12-14 16:31:32.799513] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:02.965 [2024-12-14 16:31:32.800512] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:02.965 [2024-12-14 16:31:32.800604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:02.965 [2024-12-14 16:31:32.801529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:02.965 [2024-12-14 16:31:32.801536] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:02.965 [2024-12-14 16:31:32.801540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:02.965 [2024-12-14 16:31:32.801567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801581] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:02.965 [2024-12-14 16:31:32.801586] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:02.965 [2024-12-14 16:31:32.801590] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.965 [2024-12-14 16:31:32.801603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:02.965 [2024-12-14 16:31:32.801657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:02.965 [2024-12-14 16:31:32.801674] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:02.965 [2024-12-14 16:31:32.801678] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:02.965 [2024-12-14 16:31:32.801682] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:02.965 [2024-12-14 16:31:32.801687] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:02.965 [2024-12-14 16:31:32.801691] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:02.965 [2024-12-14 16:31:32.801695] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:02.965 [2024-12-14 16:31:32.801699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:02.965 [2024-12-14 16:31:32.801732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:02.965 [2024-12-14 16:31:32.801744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.965 [2024-12-14 16:31:32.801751] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.965 [2024-12-14 16:31:32.801759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.965 [2024-12-14 16:31:32.801766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.965 [2024-12-14 16:31:32.801770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:02.965 [2024-12-14 16:31:32.801799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:02.965 [2024-12-14 16:31:32.801804] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:02.965 [2024-12-14 16:31:32.801808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:02.965 [2024-12-14 16:31:32.801827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.801836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.801884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.801892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.801899] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:02.966 [2024-12-14 16:31:32.801903] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:02.966 [2024-12-14 16:31:32.801906] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.966 [2024-12-14 16:31:32.801911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.801926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.801934] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:02.966 [2024-12-14 16:31:32.801944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.801951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.801957] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:02.966 [2024-12-14 16:31:32.801962] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:02.966 [2024-12-14 16:31:32.801965] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.966 [2024-12-14 16:31:32.801970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.801995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.802006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802019] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:02.966 [2024-12-14 16:31:32.802023] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:02.966 [2024-12-14 16:31:32.802025] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.966 [2024-12-14 16:31:32.802031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.802040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:02.966 [2024-12-14 16:31:32.802047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802078] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:02.966 [2024-12-14 16:31:32.802082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:02.966 [2024-12-14 16:31:32.802086] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:02.966 [2024-12-14 16:31:32.802103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.802112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.802122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.802130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.802140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.802148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.802158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.802168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.802179] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:02.966 [2024-12-14 16:31:32.802183] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:02.966 [2024-12-14 16:31:32.802186] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:02.966 [2024-12-14 16:31:32.802189] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:02.966 [2024-12-14 16:31:32.802193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:02.966 [2024-12-14 16:31:32.802198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:02.966 [2024-12-14 16:31:32.802204] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:02.966 [2024-12-14 16:31:32.802208] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:02.966 [2024-12-14 16:31:32.802211] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.966 [2024-12-14 16:31:32.802216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.802222] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:02.966 [2024-12-14 16:31:32.802225] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:02.966 [2024-12-14 16:31:32.802228] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.966 [2024-12-14 16:31:32.802233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.802239] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:02.966 [2024-12-14 16:31:32.802243] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:02.966 [2024-12-14 16:31:32.802246] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:02.966 [2024-12-14 16:31:32.802251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:02.966 [2024-12-14 16:31:32.802257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 
16:31:32.802267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.802278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:02.966 [2024-12-14 16:31:32.802284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:02.966 ===================================================== 00:18:02.966 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:02.966 ===================================================== 00:18:02.966 Controller Capabilities/Features 00:18:02.966 ================================ 00:18:02.966 Vendor ID: 4e58 00:18:02.966 Subsystem Vendor ID: 4e58 00:18:02.966 Serial Number: SPDK1 00:18:02.966 Model Number: SPDK bdev Controller 00:18:02.966 Firmware Version: 25.01 00:18:02.966 Recommended Arb Burst: 6 00:18:02.966 IEEE OUI Identifier: 8d 6b 50 00:18:02.966 Multi-path I/O 00:18:02.966 May have multiple subsystem ports: Yes 00:18:02.966 May have multiple controllers: Yes 00:18:02.966 Associated with SR-IOV VF: No 00:18:02.966 Max Data Transfer Size: 131072 00:18:02.966 Max Number of Namespaces: 32 00:18:02.966 Max Number of I/O Queues: 127 00:18:02.966 NVMe Specification Version (VS): 1.3 00:18:02.966 NVMe Specification Version (Identify): 1.3 00:18:02.966 Maximum Queue Entries: 256 00:18:02.966 Contiguous Queues Required: Yes 00:18:02.966 Arbitration Mechanisms Supported 00:18:02.966 Weighted Round Robin: Not Supported 00:18:02.966 Vendor Specific: Not Supported 00:18:02.966 Reset Timeout: 15000 ms 00:18:02.966 Doorbell Stride: 4 bytes 00:18:02.966 NVM Subsystem Reset: Not Supported 00:18:02.966 Command Sets Supported 00:18:02.966 NVM Command Set: Supported 00:18:02.966 Boot Partition: Not Supported 00:18:02.966 Memory Page Size Minimum: 4096 bytes 00:18:02.966 
Memory Page Size Maximum: 4096 bytes 00:18:02.966 Persistent Memory Region: Not Supported 00:18:02.966 Optional Asynchronous Events Supported 00:18:02.966 Namespace Attribute Notices: Supported 00:18:02.966 Firmware Activation Notices: Not Supported 00:18:02.966 ANA Change Notices: Not Supported 00:18:02.967 PLE Aggregate Log Change Notices: Not Supported 00:18:02.967 LBA Status Info Alert Notices: Not Supported 00:18:02.967 EGE Aggregate Log Change Notices: Not Supported 00:18:02.967 Normal NVM Subsystem Shutdown event: Not Supported 00:18:02.967 Zone Descriptor Change Notices: Not Supported 00:18:02.967 Discovery Log Change Notices: Not Supported 00:18:02.967 Controller Attributes 00:18:02.967 128-bit Host Identifier: Supported 00:18:02.967 Non-Operational Permissive Mode: Not Supported 00:18:02.967 NVM Sets: Not Supported 00:18:02.967 Read Recovery Levels: Not Supported 00:18:02.967 Endurance Groups: Not Supported 00:18:02.967 Predictable Latency Mode: Not Supported 00:18:02.967 Traffic Based Keep ALive: Not Supported 00:18:02.967 Namespace Granularity: Not Supported 00:18:02.967 SQ Associations: Not Supported 00:18:02.967 UUID List: Not Supported 00:18:02.967 Multi-Domain Subsystem: Not Supported 00:18:02.967 Fixed Capacity Management: Not Supported 00:18:02.967 Variable Capacity Management: Not Supported 00:18:02.967 Delete Endurance Group: Not Supported 00:18:02.967 Delete NVM Set: Not Supported 00:18:02.967 Extended LBA Formats Supported: Not Supported 00:18:02.967 Flexible Data Placement Supported: Not Supported 00:18:02.967 00:18:02.967 Controller Memory Buffer Support 00:18:02.967 ================================ 00:18:02.967 Supported: No 00:18:02.967 00:18:02.967 Persistent Memory Region Support 00:18:02.967 ================================ 00:18:02.967 Supported: No 00:18:02.967 00:18:02.967 Admin Command Set Attributes 00:18:02.967 ============================ 00:18:02.967 Security Send/Receive: Not Supported 00:18:02.967 Format NVM: Not Supported 
00:18:02.967 Firmware Activate/Download: Not Supported 00:18:02.967 Namespace Management: Not Supported 00:18:02.967 Device Self-Test: Not Supported 00:18:02.967 Directives: Not Supported 00:18:02.967 NVMe-MI: Not Supported 00:18:02.967 Virtualization Management: Not Supported 00:18:02.967 Doorbell Buffer Config: Not Supported 00:18:02.967 Get LBA Status Capability: Not Supported 00:18:02.967 Command & Feature Lockdown Capability: Not Supported 00:18:02.967 Abort Command Limit: 4 00:18:02.967 Async Event Request Limit: 4 00:18:02.967 Number of Firmware Slots: N/A 00:18:02.967 Firmware Slot 1 Read-Only: N/A 00:18:02.967 Firmware Activation Without Reset: N/A 00:18:02.967 Multiple Update Detection Support: N/A 00:18:02.967 Firmware Update Granularity: No Information Provided 00:18:02.967 Per-Namespace SMART Log: No 00:18:02.967 Asymmetric Namespace Access Log Page: Not Supported 00:18:02.967 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:02.967 Command Effects Log Page: Supported 00:18:02.967 Get Log Page Extended Data: Supported 00:18:02.967 Telemetry Log Pages: Not Supported 00:18:02.967 Persistent Event Log Pages: Not Supported 00:18:02.967 Supported Log Pages Log Page: May Support 00:18:02.967 Commands Supported & Effects Log Page: Not Supported 00:18:02.967 Feature Identifiers & Effects Log Page:May Support 00:18:02.967 NVMe-MI Commands & Effects Log Page: May Support 00:18:02.967 Data Area 4 for Telemetry Log: Not Supported 00:18:02.967 Error Log Page Entries Supported: 128 00:18:02.967 Keep Alive: Supported 00:18:02.967 Keep Alive Granularity: 10000 ms 00:18:02.967 00:18:02.967 NVM Command Set Attributes 00:18:02.967 ========================== 00:18:02.967 Submission Queue Entry Size 00:18:02.967 Max: 64 00:18:02.967 Min: 64 00:18:02.967 Completion Queue Entry Size 00:18:02.967 Max: 16 00:18:02.967 Min: 16 00:18:02.967 Number of Namespaces: 32 00:18:02.967 Compare Command: Supported 00:18:02.967 Write Uncorrectable Command: Not Supported 00:18:02.967 Dataset 
Management Command: Supported 00:18:02.967 Write Zeroes Command: Supported 00:18:02.967 Set Features Save Field: Not Supported 00:18:02.967 Reservations: Not Supported 00:18:02.967 Timestamp: Not Supported 00:18:02.967 Copy: Supported 00:18:02.967 Volatile Write Cache: Present 00:18:02.967 Atomic Write Unit (Normal): 1 00:18:02.967 Atomic Write Unit (PFail): 1 00:18:02.967 Atomic Compare & Write Unit: 1 00:18:02.967 Fused Compare & Write: Supported 00:18:02.967 Scatter-Gather List 00:18:02.967 SGL Command Set: Supported (Dword aligned) 00:18:02.967 SGL Keyed: Not Supported 00:18:02.967 SGL Bit Bucket Descriptor: Not Supported 00:18:02.967 SGL Metadata Pointer: Not Supported 00:18:02.967 Oversized SGL: Not Supported 00:18:02.967 SGL Metadata Address: Not Supported 00:18:02.967 SGL Offset: Not Supported 00:18:02.967 Transport SGL Data Block: Not Supported 00:18:02.967 Replay Protected Memory Block: Not Supported 00:18:02.967 00:18:02.967 Firmware Slot Information 00:18:02.967 ========================= 00:18:02.967 Active slot: 1 00:18:02.967 Slot 1 Firmware Revision: 25.01 00:18:02.967 00:18:02.967 00:18:02.967 Commands Supported and Effects 00:18:02.967 ============================== 00:18:02.967 Admin Commands 00:18:02.967 -------------- 00:18:02.967 Get Log Page (02h): Supported 00:18:02.967 Identify (06h): Supported 00:18:02.967 Abort (08h): Supported 00:18:02.967 Set Features (09h): Supported 00:18:02.967 Get Features (0Ah): Supported 00:18:02.967 Asynchronous Event Request (0Ch): Supported 00:18:02.967 Keep Alive (18h): Supported 00:18:02.967 I/O Commands 00:18:02.967 ------------ 00:18:02.967 Flush (00h): Supported LBA-Change 00:18:02.967 Write (01h): Supported LBA-Change 00:18:02.967 Read (02h): Supported 00:18:02.967 Compare (05h): Supported 00:18:02.967 Write Zeroes (08h): Supported LBA-Change 00:18:02.967 Dataset Management (09h): Supported LBA-Change 00:18:02.967 Copy (19h): Supported LBA-Change 00:18:02.967 00:18:02.967 Error Log 00:18:02.967 ========= 
00:18:02.967 00:18:02.967 Arbitration 00:18:02.967 =========== 00:18:02.967 Arbitration Burst: 1 00:18:02.967 00:18:02.967 Power Management 00:18:02.967 ================ 00:18:02.967 Number of Power States: 1 00:18:02.967 Current Power State: Power State #0 00:18:02.967 Power State #0: 00:18:02.967 Max Power: 0.00 W 00:18:02.967 Non-Operational State: Operational 00:18:02.967 Entry Latency: Not Reported 00:18:02.967 Exit Latency: Not Reported 00:18:02.967 Relative Read Throughput: 0 00:18:02.967 Relative Read Latency: 0 00:18:02.967 Relative Write Throughput: 0 00:18:02.967 Relative Write Latency: 0 00:18:02.967 Idle Power: Not Reported 00:18:02.967 Active Power: Not Reported 00:18:02.967 Non-Operational Permissive Mode: Not Supported 00:18:02.967 00:18:02.967 Health Information 00:18:02.967 ================== 00:18:02.967 Critical Warnings: 00:18:02.967 Available Spare Space: OK 00:18:02.967 Temperature: OK 00:18:02.967 Device Reliability: OK 00:18:02.967 Read Only: No 00:18:02.967 Volatile Memory Backup: OK 00:18:02.967 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:02.967 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:02.967 Available Spare: 0% 00:18:02.967 Available Sp[2024-12-14 16:31:32.802369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:02.967 [2024-12-14 16:31:32.802379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:02.967 [2024-12-14 16:31:32.802404] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:02.967 [2024-12-14 16:31:32.802413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.967 [2024-12-14 16:31:32.802419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.967 [2024-12-14 16:31:32.802425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.968 [2024-12-14 16:31:32.802430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.968 [2024-12-14 16:31:32.804564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:02.968 [2024-12-14 16:31:32.804574] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:02.968 [2024-12-14 16:31:32.805538] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:02.968 [2024-12-14 16:31:32.805589] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:02.968 [2024-12-14 16:31:32.805595] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:02.968 [2024-12-14 16:31:32.806541] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:02.968 [2024-12-14 16:31:32.806551] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:02.968 [2024-12-14 16:31:32.806609] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:02.968 [2024-12-14 16:31:32.807577] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:02.968 are Threshold: 0% 00:18:02.968 Life Percentage Used: 0% 00:18:02.968 Data Units Read: 0 00:18:02.968 Data 
Units Written: 0 00:18:02.968 Host Read Commands: 0 00:18:02.968 Host Write Commands: 0 00:18:02.968 Controller Busy Time: 0 minutes 00:18:02.968 Power Cycles: 0 00:18:02.968 Power On Hours: 0 hours 00:18:02.968 Unsafe Shutdowns: 0 00:18:02.968 Unrecoverable Media Errors: 0 00:18:02.968 Lifetime Error Log Entries: 0 00:18:02.968 Warning Temperature Time: 0 minutes 00:18:02.968 Critical Temperature Time: 0 minutes 00:18:02.968 00:18:02.968 Number of Queues 00:18:02.968 ================ 00:18:02.968 Number of I/O Submission Queues: 127 00:18:02.968 Number of I/O Completion Queues: 127 00:18:02.968 00:18:02.968 Active Namespaces 00:18:02.968 ================= 00:18:02.968 Namespace ID:1 00:18:02.968 Error Recovery Timeout: Unlimited 00:18:02.968 Command Set Identifier: NVM (00h) 00:18:02.968 Deallocate: Supported 00:18:02.968 Deallocated/Unwritten Error: Not Supported 00:18:02.968 Deallocated Read Value: Unknown 00:18:02.968 Deallocate in Write Zeroes: Not Supported 00:18:02.968 Deallocated Guard Field: 0xFFFF 00:18:02.968 Flush: Supported 00:18:02.968 Reservation: Supported 00:18:02.968 Namespace Sharing Capabilities: Multiple Controllers 00:18:02.968 Size (in LBAs): 131072 (0GiB) 00:18:02.968 Capacity (in LBAs): 131072 (0GiB) 00:18:02.968 Utilization (in LBAs): 131072 (0GiB) 00:18:02.968 NGUID: 2D420FDFFFFD4FD287F460C6F721CA8A 00:18:02.968 UUID: 2d420fdf-fffd-4fd2-87f4-60c6f721ca8a 00:18:02.968 Thin Provisioning: Not Supported 00:18:02.968 Per-NS Atomic Units: Yes 00:18:02.968 Atomic Boundary Size (Normal): 0 00:18:02.968 Atomic Boundary Size (PFail): 0 00:18:02.968 Atomic Boundary Offset: 0 00:18:02.968 Maximum Single Source Range Length: 65535 00:18:02.968 Maximum Copy Length: 65535 00:18:02.968 Maximum Source Range Count: 1 00:18:02.968 NGUID/EUI64 Never Reused: No 00:18:02.968 Namespace Write Protected: No 00:18:02.968 Number of LBA Formats: 1 00:18:02.968 Current LBA Format: LBA Format #00 00:18:02.968 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:18:02.968 00:18:02.968 16:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:02.968 [2024-12-14 16:31:33.042612] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.237 Initializing NVMe Controllers 00:18:08.237 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:08.237 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:08.237 Initialization complete. Launching workers. 00:18:08.237 ======================================================== 00:18:08.237 Latency(us) 00:18:08.237 Device Information : IOPS MiB/s Average min max 00:18:08.237 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39959.50 156.09 3203.09 972.75 8597.95 00:18:08.237 ======================================================== 00:18:08.237 Total : 39959.50 156.09 3203.09 972.75 8597.95 00:18:08.237 00:18:08.237 [2024-12-14 16:31:38.062023] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.237 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:08.237 [2024-12-14 16:31:38.297080] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:13.696 Initializing NVMe Controllers 00:18:13.696 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:13.696 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:13.696 Initialization complete. Launching workers. 00:18:13.696 ======================================================== 00:18:13.696 Latency(us) 00:18:13.696 Device Information : IOPS MiB/s Average min max 00:18:13.696 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.12 62.65 7979.85 7470.02 8485.92 00:18:13.697 ======================================================== 00:18:13.697 Total : 16039.12 62.65 7979.85 7470.02 8485.92 00:18:13.697 00:18:13.697 [2024-12-14 16:31:43.331898] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:13.697 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:13.697 [2024-12-14 16:31:43.534856] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:18.964 [2024-12-14 16:31:48.595805] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:18.964 Initializing NVMe Controllers 00:18:18.964 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:18.964 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:18.964 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:18.964 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:18.964 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:18.964 Initialization complete. Launching workers. 
00:18:18.964 Starting thread on core 2 00:18:18.964 Starting thread on core 3 00:18:18.964 Starting thread on core 1 00:18:18.964 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:18.964 [2024-12-14 16:31:48.894954] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:22.251 [2024-12-14 16:31:51.949029] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:22.251 Initializing NVMe Controllers 00:18:22.251 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.251 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.251 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:22.251 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:22.251 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:22.251 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:22.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:22.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:22.251 Initialization complete. Launching workers. 
00:18:22.251 Starting thread on core 1 with urgent priority queue 00:18:22.251 Starting thread on core 2 with urgent priority queue 00:18:22.251 Starting thread on core 3 with urgent priority queue 00:18:22.251 Starting thread on core 0 with urgent priority queue 00:18:22.251 SPDK bdev Controller (SPDK1 ) core 0: 8087.67 IO/s 12.36 secs/100000 ios 00:18:22.251 SPDK bdev Controller (SPDK1 ) core 1: 5890.00 IO/s 16.98 secs/100000 ios 00:18:22.251 SPDK bdev Controller (SPDK1 ) core 2: 6601.33 IO/s 15.15 secs/100000 ios 00:18:22.251 SPDK bdev Controller (SPDK1 ) core 3: 5920.00 IO/s 16.89 secs/100000 ios 00:18:22.251 ======================================================== 00:18:22.251 00:18:22.251 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:22.251 [2024-12-14 16:31:52.230169] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:22.251 Initializing NVMe Controllers 00:18:22.251 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.251 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:22.251 Namespace ID: 1 size: 0GB 00:18:22.251 Initialization complete. 00:18:22.251 INFO: using host memory buffer for IO 00:18:22.251 Hello world! 
00:18:22.251 [2024-12-14 16:31:52.267426] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:22.251 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:22.509 [2024-12-14 16:31:52.547440] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:23.884 Initializing NVMe Controllers 00:18:23.884 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:23.884 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:23.884 Initialization complete. Launching workers. 00:18:23.884 submit (in ns) avg, min, max = 5551.2, 3180.0, 3999357.1 00:18:23.884 complete (in ns) avg, min, max = 21356.3, 1764.8, 4994643.8 00:18:23.884 00:18:23.884 Submit histogram 00:18:23.884 ================ 00:18:23.884 Range in us Cumulative Count 00:18:23.884 3.170 - 3.185: 0.0061% ( 1) 00:18:23.884 3.185 - 3.200: 0.2789% ( 45) 00:18:23.884 3.200 - 3.215: 2.1042% ( 301) 00:18:23.884 3.215 - 3.230: 6.7128% ( 760) 00:18:23.884 3.230 - 3.246: 12.2734% ( 917) 00:18:23.884 3.246 - 3.261: 18.8527% ( 1085) 00:18:23.884 3.261 - 3.276: 26.8449% ( 1318) 00:18:23.884 3.276 - 3.291: 32.8422% ( 989) 00:18:23.884 3.291 - 3.307: 37.4689% ( 763) 00:18:23.884 3.307 - 3.322: 42.1988% ( 780) 00:18:23.884 3.322 - 3.337: 47.3652% ( 852) 00:18:23.884 3.337 - 3.352: 51.4948% ( 681) 00:18:23.884 3.352 - 3.368: 57.1888% ( 939) 00:18:23.884 3.368 - 3.383: 64.2047% ( 1157) 00:18:23.884 3.383 - 3.398: 69.2135% ( 826) 00:18:23.884 3.398 - 3.413: 75.2835% ( 1001) 00:18:23.884 3.413 - 3.429: 79.8799% ( 758) 00:18:23.884 3.429 - 3.444: 83.1726% ( 543) 00:18:23.884 3.444 - 3.459: 84.9009% ( 285) 00:18:23.884 3.459 - 3.474: 86.1743% ( 210) 00:18:23.884 3.474 - 3.490: 86.9747% ( 
132) 00:18:23.884 3.490 - 3.505: 87.4659% ( 81) 00:18:23.884 3.505 - 3.520: 88.2057% ( 122) 00:18:23.884 3.520 - 3.535: 89.0668% ( 142) 00:18:23.884 3.535 - 3.550: 90.0794% ( 167) 00:18:23.884 3.550 - 3.566: 91.1346% ( 174) 00:18:23.884 3.566 - 3.581: 91.9835% ( 140) 00:18:23.884 3.581 - 3.596: 92.7839% ( 132) 00:18:23.884 3.596 - 3.611: 93.7663% ( 162) 00:18:23.884 3.611 - 3.627: 94.7547% ( 163) 00:18:23.884 3.627 - 3.642: 95.6279% ( 144) 00:18:23.884 3.642 - 3.657: 96.4951% ( 143) 00:18:23.884 3.657 - 3.672: 97.1863% ( 114) 00:18:23.884 3.672 - 3.688: 97.7564% ( 94) 00:18:23.884 3.688 - 3.703: 98.1869% ( 71) 00:18:23.884 3.703 - 3.718: 98.4537% ( 44) 00:18:23.884 3.718 - 3.733: 98.8054% ( 58) 00:18:23.884 3.733 - 3.749: 99.0783% ( 45) 00:18:23.884 3.749 - 3.764: 99.2238% ( 24) 00:18:23.884 3.764 - 3.779: 99.3633% ( 23) 00:18:23.884 3.779 - 3.794: 99.4300% ( 11) 00:18:23.884 3.794 - 3.810: 99.4967% ( 11) 00:18:23.884 3.810 - 3.825: 99.5331% ( 6) 00:18:23.884 3.825 - 3.840: 99.5513% ( 3) 00:18:23.884 3.840 - 3.855: 99.5573% ( 1) 00:18:23.884 3.855 - 3.870: 99.5695% ( 2) 00:18:23.884 3.870 - 3.886: 99.5937% ( 4) 00:18:23.884 3.886 - 3.901: 99.6240% ( 5) 00:18:23.884 3.931 - 3.962: 99.6301% ( 1) 00:18:23.884 4.053 - 4.084: 99.6422% ( 2) 00:18:23.884 4.084 - 4.114: 99.6544% ( 2) 00:18:23.884 4.145 - 4.175: 99.6604% ( 1) 00:18:23.884 4.175 - 4.206: 99.6665% ( 1) 00:18:23.884 5.029 - 5.059: 99.6725% ( 1) 00:18:23.884 5.242 - 5.272: 99.6847% ( 2) 00:18:23.884 5.272 - 5.303: 99.6907% ( 1) 00:18:23.884 5.333 - 5.364: 99.6968% ( 1) 00:18:23.884 5.364 - 5.394: 99.7029% ( 1) 00:18:23.884 5.425 - 5.455: 99.7089% ( 1) 00:18:23.884 5.577 - 5.608: 99.7150% ( 1) 00:18:23.884 5.608 - 5.638: 99.7211% ( 1) 00:18:23.884 5.638 - 5.669: 99.7332% ( 2) 00:18:23.884 5.699 - 5.730: 99.7393% ( 1) 00:18:23.884 5.730 - 5.760: 99.7453% ( 1) 00:18:23.884 5.760 - 5.790: 99.7696% ( 4) 00:18:23.884 5.851 - 5.882: 99.7817% ( 2) 00:18:23.884 5.912 - 5.943: 99.7999% ( 3) 00:18:23.884 5.943 - 5.973: 
99.8060% ( 1) 00:18:23.884 5.973 - 6.004: 99.8120% ( 1) 00:18:23.884 6.004 - 6.034: 99.8241% ( 2) 00:18:23.884 6.095 - 6.126: 99.8363% ( 2) 00:18:23.884 6.126 - 6.156: 99.8423% ( 1) 00:18:23.884 6.156 - 6.187: 99.8484% ( 1) 00:18:23.884 6.187 - 6.217: 99.8545% ( 1) 00:18:23.884 6.278 - 6.309: 99.8605% ( 1) 00:18:23.884 6.339 - 6.370: 99.8666% ( 1) 00:18:23.884 6.370 - 6.400: 99.8787% ( 2) 00:18:23.884 6.522 - 6.552: 99.8848% ( 1) 00:18:23.884 6.613 - 6.644: 99.9030% ( 3) 00:18:23.884 [2024-12-14 16:31:53.569462] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:23.884 6.674 - 6.705: 99.9090% ( 1) 00:18:23.884 6.796 - 6.827: 99.9151% ( 1) 00:18:23.884 6.857 - 6.888: 99.9212% ( 1) 00:18:23.884 6.918 - 6.949: 99.9272% ( 1) 00:18:23.884 7.010 - 7.040: 99.9333% ( 1) 00:18:23.884 7.589 - 7.619: 99.9394% ( 1) 00:18:23.884 7.619 - 7.650: 99.9454% ( 1) 00:18:23.884 3994.575 - 4025.783: 100.0000% ( 9) 00:18:23.884 00:18:23.884 Complete histogram 00:18:23.884 ================== 00:18:23.884 Range in us Cumulative Count 00:18:23.884 1.760 - 1.768: 0.0061% ( 1) 00:18:23.884 1.768 - 1.775: 0.3820% ( 62) 00:18:23.884 1.775 - 1.783: 4.7723% ( 724) 00:18:23.884 1.783 - 1.790: 20.1686% ( 2539) 00:18:23.884 1.790 - 1.798: 39.9976% ( 3270) 00:18:23.884 1.798 - 1.806: 51.7434% ( 1937) 00:18:23.884 1.806 - 1.813: 55.7456% ( 660) 00:18:23.884 1.813 - 1.821: 57.8315% ( 344) 00:18:23.884 1.821 - 1.829: 59.2202% ( 229) 00:18:23.884 1.829 - 1.836: 60.9484% ( 285) 00:18:23.884 1.836 - 1.844: 66.5151% ( 918) 00:18:23.884 1.844 - 1.851: 76.8237% ( 1700) 00:18:23.884 1.851 - 1.859: 86.5502% ( 1604) 00:18:23.884 1.859 - 1.867: 92.1169% ( 918) 00:18:23.884 1.867 - 1.874: 94.7608% ( 436) 00:18:23.884 1.874 - 1.882: 96.2464% ( 245) 00:18:23.884 1.882 - 1.890: 97.2773% ( 170) 00:18:23.884 1.890 - 1.897: 97.7018% ( 70) 00:18:23.884 1.897 - 1.905: 97.8958% ( 32) 00:18:23.884 1.905 - 1.912: 98.1081% ( 35) 00:18:23.884 1.912 - 1.920: 98.3385% ( 
38) 00:18:23.884 1.920 - 1.928: 98.6235% ( 47) 00:18:23.884 1.928 - 1.935: 98.8357% ( 35) 00:18:23.884 1.935 - 1.943: 98.9995% ( 27) 00:18:23.884 1.943 - 1.950: 99.0722% ( 12) 00:18:23.884 1.950 - 1.966: 99.1571% ( 14) 00:18:23.884 1.966 - 1.981: 99.2178% ( 10) 00:18:23.884 1.981 - 1.996: 99.2420% ( 4) 00:18:23.884 1.996 - 2.011: 99.2541% ( 2) 00:18:23.884 2.011 - 2.027: 99.2602% ( 1) 00:18:23.884 2.027 - 2.042: 99.2723% ( 2) 00:18:23.884 2.042 - 2.057: 99.2784% ( 1) 00:18:23.884 2.057 - 2.072: 99.2905% ( 2) 00:18:23.884 2.072 - 2.088: 99.2966% ( 1) 00:18:23.884 2.103 - 2.118: 99.3026% ( 1) 00:18:23.884 2.118 - 2.133: 99.3087% ( 1) 00:18:23.884 2.149 - 2.164: 99.3148% ( 1) 00:18:23.884 2.164 - 2.179: 99.3269% ( 2) 00:18:23.884 2.179 - 2.194: 99.3330% ( 1) 00:18:23.884 2.194 - 2.210: 99.3451% ( 2) 00:18:23.884 2.210 - 2.225: 99.3512% ( 1) 00:18:23.884 2.225 - 2.240: 99.3633% ( 2) 00:18:23.884 2.331 - 2.347: 99.3694% ( 1) 00:18:23.884 2.347 - 2.362: 99.3754% ( 1) 00:18:23.884 2.560 - 2.575: 99.3815% ( 1) 00:18:23.884 3.520 - 3.535: 99.3875% ( 1) 00:18:23.884 3.581 - 3.596: 99.3936% ( 1) 00:18:23.884 3.627 - 3.642: 99.3997% ( 1) 00:18:23.885 3.703 - 3.718: 99.4057% ( 1) 00:18:23.885 3.718 - 3.733: 99.4118% ( 1) 00:18:23.885 3.825 - 3.840: 99.4179% ( 1) 00:18:23.885 4.053 - 4.084: 99.4361% ( 3) 00:18:23.885 4.297 - 4.328: 99.4421% ( 1) 00:18:23.885 4.419 - 4.450: 99.4482% ( 1) 00:18:23.885 4.480 - 4.510: 99.4542% ( 1) 00:18:23.885 4.510 - 4.541: 99.4603% ( 1) 00:18:23.885 4.663 - 4.693: 99.4664% ( 1) 00:18:23.885 4.937 - 4.968: 99.4724% ( 1) 00:18:23.885 5.059 - 5.090: 99.4785% ( 1) 00:18:23.885 5.516 - 5.547: 99.4846% ( 1) 00:18:23.885 5.699 - 5.730: 99.4906% ( 1) 00:18:23.885 5.790 - 5.821: 99.4967% ( 1) 00:18:23.885 7.284 - 7.314: 99.5028% ( 1) 00:18:23.885 10.118 - 10.179: 99.5088% ( 1) 00:18:23.885 2075.307 - 2090.910: 99.5149% ( 1) 00:18:23.885 3495.253 - 3510.857: 99.5210% ( 1) 00:18:23.885 3978.971 - 3994.575: 99.5270% ( 1) 00:18:23.885 3994.575 - 4025.783: 
99.9939% ( 77) 00:18:23.885 4993.219 - 5024.427: 100.0000% ( 1) 00:18:23.885 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:23.885 [ 00:18:23.885 { 00:18:23.885 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:23.885 "subtype": "Discovery", 00:18:23.885 "listen_addresses": [], 00:18:23.885 "allow_any_host": true, 00:18:23.885 "hosts": [] 00:18:23.885 }, 00:18:23.885 { 00:18:23.885 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:23.885 "subtype": "NVMe", 00:18:23.885 "listen_addresses": [ 00:18:23.885 { 00:18:23.885 "trtype": "VFIOUSER", 00:18:23.885 "adrfam": "IPv4", 00:18:23.885 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:23.885 "trsvcid": "0" 00:18:23.885 } 00:18:23.885 ], 00:18:23.885 "allow_any_host": true, 00:18:23.885 "hosts": [], 00:18:23.885 "serial_number": "SPDK1", 00:18:23.885 "model_number": "SPDK bdev Controller", 00:18:23.885 "max_namespaces": 32, 00:18:23.885 "min_cntlid": 1, 00:18:23.885 "max_cntlid": 65519, 00:18:23.885 "namespaces": [ 00:18:23.885 { 00:18:23.885 "nsid": 1, 00:18:23.885 "bdev_name": "Malloc1", 00:18:23.885 "name": "Malloc1", 00:18:23.885 "nguid": "2D420FDFFFFD4FD287F460C6F721CA8A", 00:18:23.885 "uuid": "2d420fdf-fffd-4fd2-87f4-60c6f721ca8a" 00:18:23.885 } 00:18:23.885 ] 00:18:23.885 }, 
00:18:23.885 { 00:18:23.885 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:23.885 "subtype": "NVMe", 00:18:23.885 "listen_addresses": [ 00:18:23.885 { 00:18:23.885 "trtype": "VFIOUSER", 00:18:23.885 "adrfam": "IPv4", 00:18:23.885 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:23.885 "trsvcid": "0" 00:18:23.885 } 00:18:23.885 ], 00:18:23.885 "allow_any_host": true, 00:18:23.885 "hosts": [], 00:18:23.885 "serial_number": "SPDK2", 00:18:23.885 "model_number": "SPDK bdev Controller", 00:18:23.885 "max_namespaces": 32, 00:18:23.885 "min_cntlid": 1, 00:18:23.885 "max_cntlid": 65519, 00:18:23.885 "namespaces": [ 00:18:23.885 { 00:18:23.885 "nsid": 1, 00:18:23.885 "bdev_name": "Malloc2", 00:18:23.885 "name": "Malloc2", 00:18:23.885 "nguid": "CBF6EDD8586C41D68B81B997A7E6B379", 00:18:23.885 "uuid": "cbf6edd8-586c-41d6-8b81-b997a7e6b379" 00:18:23.885 } 00:18:23.885 ] 00:18:23.885 } 00:18:23.885 ] 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=969152 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:23.885 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:24.143 [2024-12-14 16:31:53.980049] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:24.143 Malloc3 00:18:24.143 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:24.401 [2024-12-14 16:31:54.237983] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:24.401 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:24.401 Asynchronous Event Request test 00:18:24.401 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:24.401 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:24.401 Registering asynchronous event callbacks... 00:18:24.401 Starting namespace attribute notice tests for all controllers... 00:18:24.401 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:24.401 aer_cb - Changed Namespace 00:18:24.401 Cleaning up... 
00:18:24.401 [ 00:18:24.401 { 00:18:24.402 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:24.402 "subtype": "Discovery", 00:18:24.402 "listen_addresses": [], 00:18:24.402 "allow_any_host": true, 00:18:24.402 "hosts": [] 00:18:24.402 }, 00:18:24.402 { 00:18:24.402 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:24.402 "subtype": "NVMe", 00:18:24.402 "listen_addresses": [ 00:18:24.402 { 00:18:24.402 "trtype": "VFIOUSER", 00:18:24.402 "adrfam": "IPv4", 00:18:24.402 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:24.402 "trsvcid": "0" 00:18:24.402 } 00:18:24.402 ], 00:18:24.402 "allow_any_host": true, 00:18:24.402 "hosts": [], 00:18:24.402 "serial_number": "SPDK1", 00:18:24.402 "model_number": "SPDK bdev Controller", 00:18:24.402 "max_namespaces": 32, 00:18:24.402 "min_cntlid": 1, 00:18:24.402 "max_cntlid": 65519, 00:18:24.402 "namespaces": [ 00:18:24.402 { 00:18:24.402 "nsid": 1, 00:18:24.402 "bdev_name": "Malloc1", 00:18:24.402 "name": "Malloc1", 00:18:24.402 "nguid": "2D420FDFFFFD4FD287F460C6F721CA8A", 00:18:24.402 "uuid": "2d420fdf-fffd-4fd2-87f4-60c6f721ca8a" 00:18:24.402 }, 00:18:24.402 { 00:18:24.402 "nsid": 2, 00:18:24.402 "bdev_name": "Malloc3", 00:18:24.402 "name": "Malloc3", 00:18:24.402 "nguid": "3F0F5FC59B1A40AB94387EE6148D0BE6", 00:18:24.402 "uuid": "3f0f5fc5-9b1a-40ab-9438-7ee6148d0be6" 00:18:24.402 } 00:18:24.402 ] 00:18:24.402 }, 00:18:24.402 { 00:18:24.402 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:24.402 "subtype": "NVMe", 00:18:24.402 "listen_addresses": [ 00:18:24.402 { 00:18:24.402 "trtype": "VFIOUSER", 00:18:24.402 "adrfam": "IPv4", 00:18:24.402 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:24.402 "trsvcid": "0" 00:18:24.402 } 00:18:24.402 ], 00:18:24.402 "allow_any_host": true, 00:18:24.402 "hosts": [], 00:18:24.402 "serial_number": "SPDK2", 00:18:24.402 "model_number": "SPDK bdev Controller", 00:18:24.402 "max_namespaces": 32, 00:18:24.402 "min_cntlid": 1, 00:18:24.402 "max_cntlid": 65519, 00:18:24.402 "namespaces": [ 
00:18:24.402 { 00:18:24.402 "nsid": 1, 00:18:24.402 "bdev_name": "Malloc2", 00:18:24.402 "name": "Malloc2", 00:18:24.402 "nguid": "CBF6EDD8586C41D68B81B997A7E6B379", 00:18:24.402 "uuid": "cbf6edd8-586c-41d6-8b81-b997a7e6b379" 00:18:24.402 } 00:18:24.402 ] 00:18:24.402 } 00:18:24.402 ] 00:18:24.402 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 969152 00:18:24.402 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:24.402 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:24.402 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:24.402 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:24.402 [2024-12-14 16:31:54.472635] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:24.402 [2024-12-14 16:31:54.472668] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969204 ] 00:18:24.662 [2024-12-14 16:31:54.509058] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:24.662 [2024-12-14 16:31:54.517788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:24.662 [2024-12-14 16:31:54.517810] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f848805c000 00:18:24.662 [2024-12-14 16:31:54.518785] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.662 [2024-12-14 16:31:54.519793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.662 [2024-12-14 16:31:54.520795] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.662 [2024-12-14 16:31:54.521804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:24.662 [2024-12-14 16:31:54.522812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:24.662 [2024-12-14 16:31:54.523823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.662 [2024-12-14 16:31:54.524832] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:24.662 
[2024-12-14 16:31:54.525838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:24.662 [2024-12-14 16:31:54.526841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:24.662 [2024-12-14 16:31:54.526850] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8486d66000 00:18:24.662 [2024-12-14 16:31:54.527763] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:24.662 [2024-12-14 16:31:54.541123] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:24.662 [2024-12-14 16:31:54.541153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:24.662 [2024-12-14 16:31:54.543202] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:24.662 [2024-12-14 16:31:54.543236] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:24.662 [2024-12-14 16:31:54.543307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:24.662 [2024-12-14 16:31:54.543321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:24.662 [2024-12-14 16:31:54.543326] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:24.662 [2024-12-14 16:31:54.544212] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:24.662 [2024-12-14 16:31:54.544221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:24.662 [2024-12-14 16:31:54.544228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:24.662 [2024-12-14 16:31:54.545214] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:24.662 [2024-12-14 16:31:54.545222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:24.662 [2024-12-14 16:31:54.545229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:24.662 [2024-12-14 16:31:54.546225] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:24.662 [2024-12-14 16:31:54.546234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:24.662 [2024-12-14 16:31:54.547231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:24.662 [2024-12-14 16:31:54.547239] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:24.662 [2024-12-14 16:31:54.547244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:24.662 [2024-12-14 16:31:54.547249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:24.662 [2024-12-14 16:31:54.547357] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:24.662 [2024-12-14 16:31:54.547361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:24.662 [2024-12-14 16:31:54.547366] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:24.662 [2024-12-14 16:31:54.548233] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:24.662 [2024-12-14 16:31:54.549236] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:24.662 [2024-12-14 16:31:54.550247] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:24.662 [2024-12-14 16:31:54.551252] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:24.662 [2024-12-14 16:31:54.551289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:24.662 [2024-12-14 16:31:54.552264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:24.663 [2024-12-14 16:31:54.552272] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:24.663 [2024-12-14 16:31:54.552277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.552294] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:24.663 [2024-12-14 16:31:54.552304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.552313] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:24.663 [2024-12-14 16:31:54.552318] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:24.663 [2024-12-14 16:31:54.552322] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.663 [2024-12-14 16:31:54.552332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.558565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.558576] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:24.663 [2024-12-14 16:31:54.558580] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:24.663 [2024-12-14 16:31:54.558584] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:24.663 [2024-12-14 16:31:54.558589] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:24.663 [2024-12-14 16:31:54.558593] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:24.663 [2024-12-14 16:31:54.558597] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:24.663 [2024-12-14 16:31:54.558602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.558611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.558622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.566564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.566576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.663 [2024-12-14 16:31:54.566584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.663 [2024-12-14 16:31:54.566591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.663 [2024-12-14 16:31:54.566601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.663 [2024-12-14 16:31:54.566605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.566613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.566622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.574564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.574573] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:24.663 [2024-12-14 16:31:54.574577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.574583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.574589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.574596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.582561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.582611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.582620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:24.663 
[2024-12-14 16:31:54.582627] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:24.663 [2024-12-14 16:31:54.582631] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:24.663 [2024-12-14 16:31:54.582634] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.663 [2024-12-14 16:31:54.582640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.590562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.590571] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:24.663 [2024-12-14 16:31:54.590581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.590588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.590594] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:24.663 [2024-12-14 16:31:54.590598] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:24.663 [2024-12-14 16:31:54.590601] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.663 [2024-12-14 16:31:54.590607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.598561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.598574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.598581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.598587] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:24.663 [2024-12-14 16:31:54.598591] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:24.663 [2024-12-14 16:31:54.598594] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.663 [2024-12-14 16:31:54.598599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.606562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.606571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.606576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.606584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.606589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.606594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.606598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.606603] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:24.663 [2024-12-14 16:31:54.606607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:24.663 [2024-12-14 16:31:54.606612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:24.663 [2024-12-14 16:31:54.606627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.614562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.614575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.622560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.622572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.630560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 
16:31:54.630572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:24.663 [2024-12-14 16:31:54.638562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:24.663 [2024-12-14 16:31:54.638579] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:24.663 [2024-12-14 16:31:54.638583] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:24.663 [2024-12-14 16:31:54.638586] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:24.663 [2024-12-14 16:31:54.638589] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:24.663 [2024-12-14 16:31:54.638592] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:24.663 [2024-12-14 16:31:54.638598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:24.663 [2024-12-14 16:31:54.638604] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:24.663 [2024-12-14 16:31:54.638608] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:24.663 [2024-12-14 16:31:54.638611] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.663 [2024-12-14 16:31:54.638616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:24.664 [2024-12-14 16:31:54.638622] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:24.664 [2024-12-14 16:31:54.638626] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:24.664 [2024-12-14 16:31:54.638629] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.664 [2024-12-14 16:31:54.638634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:24.664 [2024-12-14 16:31:54.638640] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:24.664 [2024-12-14 16:31:54.638644] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:24.664 [2024-12-14 16:31:54.638647] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:24.664 [2024-12-14 16:31:54.638652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:24.664 [2024-12-14 16:31:54.646561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:24.664 [2024-12-14 16:31:54.646575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:24.664 [2024-12-14 16:31:54.646584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:24.664 [2024-12-14 16:31:54.646591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:24.664 ===================================================== 00:18:24.664 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:24.664 ===================================================== 00:18:24.664 Controller Capabilities/Features 00:18:24.664 
================================ 00:18:24.664 Vendor ID: 4e58 00:18:24.664 Subsystem Vendor ID: 4e58 00:18:24.664 Serial Number: SPDK2 00:18:24.664 Model Number: SPDK bdev Controller 00:18:24.664 Firmware Version: 25.01 00:18:24.664 Recommended Arb Burst: 6 00:18:24.664 IEEE OUI Identifier: 8d 6b 50 00:18:24.664 Multi-path I/O 00:18:24.664 May have multiple subsystem ports: Yes 00:18:24.664 May have multiple controllers: Yes 00:18:24.664 Associated with SR-IOV VF: No 00:18:24.664 Max Data Transfer Size: 131072 00:18:24.664 Max Number of Namespaces: 32 00:18:24.664 Max Number of I/O Queues: 127 00:18:24.664 NVMe Specification Version (VS): 1.3 00:18:24.664 NVMe Specification Version (Identify): 1.3 00:18:24.664 Maximum Queue Entries: 256 00:18:24.664 Contiguous Queues Required: Yes 00:18:24.664 Arbitration Mechanisms Supported 00:18:24.664 Weighted Round Robin: Not Supported 00:18:24.664 Vendor Specific: Not Supported 00:18:24.664 Reset Timeout: 15000 ms 00:18:24.664 Doorbell Stride: 4 bytes 00:18:24.664 NVM Subsystem Reset: Not Supported 00:18:24.664 Command Sets Supported 00:18:24.664 NVM Command Set: Supported 00:18:24.664 Boot Partition: Not Supported 00:18:24.664 Memory Page Size Minimum: 4096 bytes 00:18:24.664 Memory Page Size Maximum: 4096 bytes 00:18:24.664 Persistent Memory Region: Not Supported 00:18:24.664 Optional Asynchronous Events Supported 00:18:24.664 Namespace Attribute Notices: Supported 00:18:24.664 Firmware Activation Notices: Not Supported 00:18:24.664 ANA Change Notices: Not Supported 00:18:24.664 PLE Aggregate Log Change Notices: Not Supported 00:18:24.664 LBA Status Info Alert Notices: Not Supported 00:18:24.664 EGE Aggregate Log Change Notices: Not Supported 00:18:24.664 Normal NVM Subsystem Shutdown event: Not Supported 00:18:24.664 Zone Descriptor Change Notices: Not Supported 00:18:24.664 Discovery Log Change Notices: Not Supported 00:18:24.664 Controller Attributes 00:18:24.664 128-bit Host Identifier: Supported 00:18:24.664 
Non-Operational Permissive Mode: Not Supported 00:18:24.664 NVM Sets: Not Supported 00:18:24.664 Read Recovery Levels: Not Supported 00:18:24.664 Endurance Groups: Not Supported 00:18:24.664 Predictable Latency Mode: Not Supported 00:18:24.664 Traffic Based Keep ALive: Not Supported 00:18:24.664 Namespace Granularity: Not Supported 00:18:24.664 SQ Associations: Not Supported 00:18:24.664 UUID List: Not Supported 00:18:24.664 Multi-Domain Subsystem: Not Supported 00:18:24.664 Fixed Capacity Management: Not Supported 00:18:24.664 Variable Capacity Management: Not Supported 00:18:24.664 Delete Endurance Group: Not Supported 00:18:24.664 Delete NVM Set: Not Supported 00:18:24.664 Extended LBA Formats Supported: Not Supported 00:18:24.664 Flexible Data Placement Supported: Not Supported 00:18:24.664 00:18:24.664 Controller Memory Buffer Support 00:18:24.664 ================================ 00:18:24.664 Supported: No 00:18:24.664 00:18:24.664 Persistent Memory Region Support 00:18:24.664 ================================ 00:18:24.664 Supported: No 00:18:24.664 00:18:24.664 Admin Command Set Attributes 00:18:24.664 ============================ 00:18:24.664 Security Send/Receive: Not Supported 00:18:24.664 Format NVM: Not Supported 00:18:24.664 Firmware Activate/Download: Not Supported 00:18:24.664 Namespace Management: Not Supported 00:18:24.664 Device Self-Test: Not Supported 00:18:24.664 Directives: Not Supported 00:18:24.664 NVMe-MI: Not Supported 00:18:24.664 Virtualization Management: Not Supported 00:18:24.664 Doorbell Buffer Config: Not Supported 00:18:24.664 Get LBA Status Capability: Not Supported 00:18:24.664 Command & Feature Lockdown Capability: Not Supported 00:18:24.664 Abort Command Limit: 4 00:18:24.664 Async Event Request Limit: 4 00:18:24.664 Number of Firmware Slots: N/A 00:18:24.664 Firmware Slot 1 Read-Only: N/A 00:18:24.664 Firmware Activation Without Reset: N/A 00:18:24.664 Multiple Update Detection Support: N/A 00:18:24.664 Firmware Update 
Granularity: No Information Provided 00:18:24.664 Per-Namespace SMART Log: No 00:18:24.664 Asymmetric Namespace Access Log Page: Not Supported 00:18:24.664 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:24.664 Command Effects Log Page: Supported 00:18:24.664 Get Log Page Extended Data: Supported 00:18:24.664 Telemetry Log Pages: Not Supported 00:18:24.664 Persistent Event Log Pages: Not Supported 00:18:24.664 Supported Log Pages Log Page: May Support 00:18:24.664 Commands Supported & Effects Log Page: Not Supported 00:18:24.664 Feature Identifiers & Effects Log Page:May Support 00:18:24.664 NVMe-MI Commands & Effects Log Page: May Support 00:18:24.664 Data Area 4 for Telemetry Log: Not Supported 00:18:24.664 Error Log Page Entries Supported: 128 00:18:24.664 Keep Alive: Supported 00:18:24.664 Keep Alive Granularity: 10000 ms 00:18:24.664 00:18:24.664 NVM Command Set Attributes 00:18:24.664 ========================== 00:18:24.664 Submission Queue Entry Size 00:18:24.664 Max: 64 00:18:24.664 Min: 64 00:18:24.664 Completion Queue Entry Size 00:18:24.664 Max: 16 00:18:24.664 Min: 16 00:18:24.664 Number of Namespaces: 32 00:18:24.664 Compare Command: Supported 00:18:24.664 Write Uncorrectable Command: Not Supported 00:18:24.664 Dataset Management Command: Supported 00:18:24.664 Write Zeroes Command: Supported 00:18:24.664 Set Features Save Field: Not Supported 00:18:24.664 Reservations: Not Supported 00:18:24.664 Timestamp: Not Supported 00:18:24.664 Copy: Supported 00:18:24.664 Volatile Write Cache: Present 00:18:24.664 Atomic Write Unit (Normal): 1 00:18:24.664 Atomic Write Unit (PFail): 1 00:18:24.664 Atomic Compare & Write Unit: 1 00:18:24.664 Fused Compare & Write: Supported 00:18:24.664 Scatter-Gather List 00:18:24.664 SGL Command Set: Supported (Dword aligned) 00:18:24.664 SGL Keyed: Not Supported 00:18:24.664 SGL Bit Bucket Descriptor: Not Supported 00:18:24.664 SGL Metadata Pointer: Not Supported 00:18:24.664 Oversized SGL: Not Supported 00:18:24.664 SGL 
Metadata Address: Not Supported 00:18:24.664 SGL Offset: Not Supported 00:18:24.664 Transport SGL Data Block: Not Supported 00:18:24.664 Replay Protected Memory Block: Not Supported 00:18:24.664 00:18:24.664 Firmware Slot Information 00:18:24.664 ========================= 00:18:24.664 Active slot: 1 00:18:24.664 Slot 1 Firmware Revision: 25.01 00:18:24.664 00:18:24.664 00:18:24.664 Commands Supported and Effects 00:18:24.664 ============================== 00:18:24.664 Admin Commands 00:18:24.664 -------------- 00:18:24.664 Get Log Page (02h): Supported 00:18:24.664 Identify (06h): Supported 00:18:24.664 Abort (08h): Supported 00:18:24.664 Set Features (09h): Supported 00:18:24.664 Get Features (0Ah): Supported 00:18:24.664 Asynchronous Event Request (0Ch): Supported 00:18:24.664 Keep Alive (18h): Supported 00:18:24.664 I/O Commands 00:18:24.664 ------------ 00:18:24.664 Flush (00h): Supported LBA-Change 00:18:24.664 Write (01h): Supported LBA-Change 00:18:24.664 Read (02h): Supported 00:18:24.665 Compare (05h): Supported 00:18:24.665 Write Zeroes (08h): Supported LBA-Change 00:18:24.665 Dataset Management (09h): Supported LBA-Change 00:18:24.665 Copy (19h): Supported LBA-Change 00:18:24.665 00:18:24.665 Error Log 00:18:24.665 ========= 00:18:24.665 00:18:24.665 Arbitration 00:18:24.665 =========== 00:18:24.665 Arbitration Burst: 1 00:18:24.665 00:18:24.665 Power Management 00:18:24.665 ================ 00:18:24.665 Number of Power States: 1 00:18:24.665 Current Power State: Power State #0 00:18:24.665 Power State #0: 00:18:24.665 Max Power: 0.00 W 00:18:24.665 Non-Operational State: Operational 00:18:24.665 Entry Latency: Not Reported 00:18:24.665 Exit Latency: Not Reported 00:18:24.665 Relative Read Throughput: 0 00:18:24.665 Relative Read Latency: 0 00:18:24.665 Relative Write Throughput: 0 00:18:24.665 Relative Write Latency: 0 00:18:24.665 Idle Power: Not Reported 00:18:24.665 Active Power: Not Reported 00:18:24.665 Non-Operational Permissive Mode: Not 
Supported 00:18:24.665 00:18:24.665 Health Information 00:18:24.665 ================== 00:18:24.665 Critical Warnings: 00:18:24.665 Available Spare Space: OK 00:18:24.665 Temperature: OK 00:18:24.665 Device Reliability: OK 00:18:24.665 Read Only: No 00:18:24.665 Volatile Memory Backup: OK 00:18:24.665 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:24.665 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:24.665 Available Spare: 0% 00:18:24.665 Available Sp[2024-12-14 16:31:54.646675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:24.665 [2024-12-14 16:31:54.654561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:24.665 [2024-12-14 16:31:54.654591] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:24.665 [2024-12-14 16:31:54.654599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.665 [2024-12-14 16:31:54.654605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.665 [2024-12-14 16:31:54.654610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.665 [2024-12-14 16:31:54.654615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.665 [2024-12-14 16:31:54.654657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:24.665 [2024-12-14 16:31:54.654667] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:24.665 
[2024-12-14 16:31:54.655663] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:24.665 [2024-12-14 16:31:54.655705] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:24.665 [2024-12-14 16:31:54.655711] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:24.665 [2024-12-14 16:31:54.656673] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:24.665 [2024-12-14 16:31:54.656683] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:24.665 [2024-12-14 16:31:54.656732] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:24.665 [2024-12-14 16:31:54.659561] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:24.665 are Threshold: 0% 00:18:24.665 Life Percentage Used: 0% 00:18:24.665 Data Units Read: 0 00:18:24.665 Data Units Written: 0 00:18:24.665 Host Read Commands: 0 00:18:24.665 Host Write Commands: 0 00:18:24.665 Controller Busy Time: 0 minutes 00:18:24.665 Power Cycles: 0 00:18:24.665 Power On Hours: 0 hours 00:18:24.665 Unsafe Shutdowns: 0 00:18:24.665 Unrecoverable Media Errors: 0 00:18:24.665 Lifetime Error Log Entries: 0 00:18:24.665 Warning Temperature Time: 0 minutes 00:18:24.665 Critical Temperature Time: 0 minutes 00:18:24.665 00:18:24.665 Number of Queues 00:18:24.665 ================ 00:18:24.665 Number of I/O Submission Queues: 127 00:18:24.665 Number of I/O Completion Queues: 127 00:18:24.665 00:18:24.665 Active Namespaces 00:18:24.665 ================= 00:18:24.665 Namespace ID:1 00:18:24.665 Error Recovery Timeout: Unlimited 
00:18:24.665 Command Set Identifier: NVM (00h) 00:18:24.665 Deallocate: Supported 00:18:24.665 Deallocated/Unwritten Error: Not Supported 00:18:24.665 Deallocated Read Value: Unknown 00:18:24.665 Deallocate in Write Zeroes: Not Supported 00:18:24.665 Deallocated Guard Field: 0xFFFF 00:18:24.665 Flush: Supported 00:18:24.665 Reservation: Supported 00:18:24.665 Namespace Sharing Capabilities: Multiple Controllers 00:18:24.665 Size (in LBAs): 131072 (0GiB) 00:18:24.665 Capacity (in LBAs): 131072 (0GiB) 00:18:24.665 Utilization (in LBAs): 131072 (0GiB) 00:18:24.665 NGUID: CBF6EDD8586C41D68B81B997A7E6B379 00:18:24.665 UUID: cbf6edd8-586c-41d6-8b81-b997a7e6b379 00:18:24.665 Thin Provisioning: Not Supported 00:18:24.665 Per-NS Atomic Units: Yes 00:18:24.665 Atomic Boundary Size (Normal): 0 00:18:24.665 Atomic Boundary Size (PFail): 0 00:18:24.665 Atomic Boundary Offset: 0 00:18:24.665 Maximum Single Source Range Length: 65535 00:18:24.665 Maximum Copy Length: 65535 00:18:24.665 Maximum Source Range Count: 1 00:18:24.665 NGUID/EUI64 Never Reused: No 00:18:24.665 Namespace Write Protected: No 00:18:24.665 Number of LBA Formats: 1 00:18:24.665 Current LBA Format: LBA Format #00 00:18:24.665 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:24.665 00:18:24.665 16:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:24.923 [2024-12-14 16:31:54.889799] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.188 Initializing NVMe Controllers 00:18:30.188 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:30.188 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:30.188 Initialization complete. Launching workers. 00:18:30.188 ======================================================== 00:18:30.188 Latency(us) 00:18:30.188 Device Information : IOPS MiB/s Average min max 00:18:30.188 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39940.95 156.02 3205.07 957.73 6655.57 00:18:30.188 ======================================================== 00:18:30.188 Total : 39940.95 156.02 3205.07 957.73 6655.57 00:18:30.188 00:18:30.188 [2024-12-14 16:31:59.991822] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.188 16:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:30.188 [2024-12-14 16:32:00.230569] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:35.453 Initializing NVMe Controllers 00:18:35.453 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:35.453 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:35.453 Initialization complete. Launching workers. 
00:18:35.453 ======================================================== 00:18:35.453 Latency(us) 00:18:35.453 Device Information : IOPS MiB/s Average min max 00:18:35.453 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39896.84 155.85 3207.86 978.14 10307.30 00:18:35.453 ======================================================== 00:18:35.453 Total : 39896.84 155.85 3207.86 978.14 10307.30 00:18:35.453 00:18:35.453 [2024-12-14 16:32:05.249434] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:35.453 16:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:35.453 [2024-12-14 16:32:05.462653] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:40.720 [2024-12-14 16:32:10.605652] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:40.720 Initializing NVMe Controllers 00:18:40.720 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:40.720 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:40.720 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:40.720 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:40.720 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:40.720 Initialization complete. Launching workers. 
00:18:40.720 Starting thread on core 2 00:18:40.720 Starting thread on core 3 00:18:40.720 Starting thread on core 1 00:18:40.720 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:40.979 [2024-12-14 16:32:10.896066] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:44.266 [2024-12-14 16:32:13.960760] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:44.266 Initializing NVMe Controllers 00:18:44.266 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:44.266 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:44.266 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:44.266 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:44.266 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:44.266 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:44.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:44.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:44.266 Initialization complete. Launching workers. 
00:18:44.266 Starting thread on core 1 with urgent priority queue 00:18:44.266 Starting thread on core 2 with urgent priority queue 00:18:44.266 Starting thread on core 3 with urgent priority queue 00:18:44.266 Starting thread on core 0 with urgent priority queue 00:18:44.266 SPDK bdev Controller (SPDK2 ) core 0: 10005.33 IO/s 9.99 secs/100000 ios 00:18:44.266 SPDK bdev Controller (SPDK2 ) core 1: 7296.67 IO/s 13.70 secs/100000 ios 00:18:44.266 SPDK bdev Controller (SPDK2 ) core 2: 8360.33 IO/s 11.96 secs/100000 ios 00:18:44.266 SPDK bdev Controller (SPDK2 ) core 3: 7512.00 IO/s 13.31 secs/100000 ios 00:18:44.266 ======================================================== 00:18:44.266 00:18:44.266 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:44.266 [2024-12-14 16:32:14.245994] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:44.266 Initializing NVMe Controllers 00:18:44.266 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:44.266 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:44.266 Namespace ID: 1 size: 0GB 00:18:44.266 Initialization complete. 00:18:44.266 INFO: using host memory buffer for IO 00:18:44.266 Hello world! 
00:18:44.266 [2024-12-14 16:32:14.258066] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:44.266 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:44.524 [2024-12-14 16:32:14.536933] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:45.898 Initializing NVMe Controllers 00:18:45.898 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:45.898 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:45.898 Initialization complete. Launching workers. 00:18:45.898 submit (in ns) avg, min, max = 7495.7, 3124.8, 3999435.2 00:18:45.898 complete (in ns) avg, min, max = 19469.0, 1725.7, 4006562.9 00:18:45.898 00:18:45.898 Submit histogram 00:18:45.898 ================ 00:18:45.898 Range in us Cumulative Count 00:18:45.898 3.124 - 3.139: 0.0183% ( 3) 00:18:45.898 3.139 - 3.154: 0.0488% ( 5) 00:18:45.898 3.154 - 3.170: 0.0549% ( 1) 00:18:45.898 3.170 - 3.185: 0.1707% ( 19) 00:18:45.898 3.185 - 3.200: 1.3596% ( 195) 00:18:45.898 3.200 - 3.215: 5.1091% ( 615) 00:18:45.898 3.215 - 3.230: 11.3401% ( 1022) 00:18:45.898 3.230 - 3.246: 17.2052% ( 962) 00:18:45.898 3.246 - 3.261: 24.5946% ( 1212) 00:18:45.898 3.261 - 3.276: 32.1180% ( 1234) 00:18:45.898 3.276 - 3.291: 38.7453% ( 1087) 00:18:45.898 3.291 - 3.307: 43.7081% ( 814) 00:18:45.898 3.307 - 3.322: 48.4941% ( 785) 00:18:45.898 3.322 - 3.337: 52.4509% ( 649) 00:18:45.898 3.337 - 3.352: 55.6152% ( 519) 00:18:45.898 3.352 - 3.368: 62.3339% ( 1102) 00:18:45.898 3.368 - 3.383: 69.1013% ( 1110) 00:18:45.898 3.383 - 3.398: 74.3141% ( 855) 00:18:45.898 3.398 - 3.413: 79.9232% ( 920) 00:18:45.898 3.413 - 3.429: 83.4715% ( 582) 00:18:45.898 3.429 - 3.444: 85.9773% ( 411) 
00:18:45.898 3.444 - 3.459: 87.1113% ( 186) 00:18:45.898 3.459 - 3.474: 87.6296% ( 85) 00:18:45.898 3.474 - 3.490: 88.0624% ( 71) 00:18:45.898 3.490 - 3.505: 88.5990% ( 88) 00:18:45.898 3.505 - 3.520: 89.2757% ( 111) 00:18:45.898 3.520 - 3.535: 90.1719% ( 147) 00:18:45.898 3.535 - 3.550: 91.3303% ( 190) 00:18:45.898 3.550 - 3.566: 92.2266% ( 147) 00:18:45.898 3.566 - 3.581: 93.1655% ( 154) 00:18:45.898 3.581 - 3.596: 93.8422% ( 111) 00:18:45.898 3.596 - 3.611: 94.6165% ( 127) 00:18:45.898 3.611 - 3.627: 95.6103% ( 163) 00:18:45.898 3.627 - 3.642: 96.4029% ( 130) 00:18:45.898 3.642 - 3.657: 97.2077% ( 132) 00:18:45.898 3.657 - 3.672: 97.7015% ( 81) 00:18:45.898 3.672 - 3.688: 98.2014% ( 82) 00:18:45.898 3.688 - 3.703: 98.5977% ( 65) 00:18:45.898 3.703 - 3.718: 98.9087% ( 51) 00:18:45.898 3.718 - 3.733: 99.1221% ( 35) 00:18:45.898 3.733 - 3.749: 99.3050% ( 30) 00:18:45.898 3.749 - 3.764: 99.4269% ( 20) 00:18:45.898 3.764 - 3.779: 99.4818% ( 9) 00:18:45.898 3.779 - 3.794: 99.5184% ( 6) 00:18:45.898 3.794 - 3.810: 99.5427% ( 4) 00:18:45.898 3.810 - 3.825: 99.5549% ( 2) 00:18:45.898 3.825 - 3.840: 99.5610% ( 1) 00:18:45.898 3.840 - 3.855: 99.5671% ( 1) 00:18:45.898 3.855 - 3.870: 99.5732% ( 1) 00:18:45.898 3.870 - 3.886: 99.5854% ( 2) 00:18:45.898 3.886 - 3.901: 99.5976% ( 2) 00:18:45.898 3.901 - 3.931: 99.6037% ( 1) 00:18:45.898 3.931 - 3.962: 99.6098% ( 1) 00:18:45.898 3.992 - 4.023: 99.6159% ( 1) 00:18:45.898 4.023 - 4.053: 99.6281% ( 2) 00:18:45.898 4.084 - 4.114: 99.6342% ( 1) 00:18:45.898 4.114 - 4.145: 99.6464% ( 2) 00:18:45.898 4.145 - 4.175: 99.6525% ( 1) 00:18:45.898 4.236 - 4.267: 99.6586% ( 1) 00:18:45.898 4.389 - 4.419: 99.6647% ( 1) 00:18:45.898 4.785 - 4.815: 99.6708% ( 1) 00:18:45.898 4.998 - 5.029: 99.6769% ( 1) 00:18:45.898 5.029 - 5.059: 99.6830% ( 1) 00:18:45.898 5.090 - 5.120: 99.6891% ( 1) 00:18:45.898 5.120 - 5.150: 99.6952% ( 1) 00:18:45.898 5.303 - 5.333: 99.7013% ( 1) 00:18:45.898 5.425 - 5.455: 99.7134% ( 2) 00:18:45.898 5.516 - 5.547: 
99.7195% ( 1) 00:18:45.898 5.608 - 5.638: 99.7256% ( 1) 00:18:45.898 5.638 - 5.669: 99.7317% ( 1) 00:18:45.898 5.699 - 5.730: 99.7439% ( 2) 00:18:45.898 5.730 - 5.760: 99.7500% ( 1) 00:18:45.898 5.790 - 5.821: 99.7561% ( 1) 00:18:45.898 5.882 - 5.912: 99.7683% ( 2) 00:18:45.898 5.912 - 5.943: 99.7744% ( 1) 00:18:45.898 5.943 - 5.973: 99.7805% ( 1) 00:18:45.898 6.187 - 6.217: 99.7866% ( 1) 00:18:45.898 6.217 - 6.248: 99.7927% ( 1) 00:18:45.898 6.309 - 6.339: 99.7988% ( 1) 00:18:45.898 [2024-12-14 16:32:15.638548] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:45.898 6.461 - 6.491: 99.8049% ( 1) 00:18:45.898 6.491 - 6.522: 99.8171% ( 2) 00:18:45.898 6.552 - 6.583: 99.8232% ( 1) 00:18:45.898 6.583 - 6.613: 99.8354% ( 2) 00:18:45.898 7.162 - 7.192: 99.8415% ( 1) 00:18:45.898 7.406 - 7.436: 99.8476% ( 1) 00:18:45.898 7.497 - 7.528: 99.8537% ( 1) 00:18:45.898 7.528 - 7.558: 99.8598% ( 1) 00:18:45.898 7.589 - 7.619: 99.8659% ( 1) 00:18:45.898 7.741 - 7.771: 99.8720% ( 1) 00:18:45.898 7.924 - 7.985: 99.8781% ( 1) 00:18:45.898 7.985 - 8.046: 99.8842% ( 1) 00:18:45.898 8.411 - 8.472: 99.8903% ( 1) 00:18:45.898 10.118 - 10.179: 99.8964% ( 1) 00:18:45.898 3994.575 - 4025.783: 100.0000% ( 17) 00:18:45.898 00:18:45.898 Complete histogram 00:18:45.898 ================== 00:18:45.898 Range in us Cumulative Count 00:18:45.898 1.722 - 1.730: 0.0183% ( 3) 00:18:45.898 1.730 - 1.737: 0.0488% ( 5) 00:18:45.898 1.737 - 1.745: 0.0975% ( 8) 00:18:45.898 1.745 - 1.752: 0.1097% ( 2) 00:18:45.898 1.760 - 1.768: 0.1951% ( 14) 00:18:45.898 1.768 - 1.775: 1.7132% ( 249) 00:18:45.898 1.775 - 1.783: 10.3768% ( 1421) 00:18:45.898 1.783 - 1.790: 31.1730% ( 3411) 00:18:45.898 1.790 - 1.798: 50.3536% ( 3146) 00:18:45.898 1.798 - 1.806: 57.7003% ( 1205) 00:18:45.898 1.806 - 1.813: 60.2000% ( 410) 00:18:45.898 1.813 - 1.821: 61.8461% ( 270) 00:18:45.898 1.821 - 1.829: 64.4251% ( 423) 00:18:45.898 1.829 - 1.836: 72.1741% ( 1271) 00:18:45.898 
1.836 - 1.844: 83.4715% ( 1853) 00:18:45.898 1.844 - 1.851: 91.6230% ( 1337) 00:18:45.898 1.851 - 1.859: 94.6287% ( 493) 00:18:45.898 1.859 - 1.867: 96.1712% ( 253) 00:18:45.898 1.867 - 1.874: 97.3235% ( 189) 00:18:45.898 1.874 - 1.882: 97.9149% ( 97) 00:18:45.898 1.882 - 1.890: 98.1466% ( 38) 00:18:45.898 1.890 - 1.897: 98.3112% ( 27) 00:18:45.898 1.897 - 1.905: 98.4514% ( 23) 00:18:45.898 1.905 - 1.912: 98.7258% ( 45) 00:18:45.898 1.912 - 1.920: 98.9087% ( 30) 00:18:45.898 1.920 - 1.928: 99.0733% ( 27) 00:18:45.898 1.928 - 1.935: 99.1891% ( 19) 00:18:45.898 1.935 - 1.943: 99.2379% ( 8) 00:18:45.898 1.943 - 1.950: 99.2562% ( 3) 00:18:45.898 1.950 - 1.966: 99.2806% ( 4) 00:18:45.898 1.966 - 1.981: 99.2867% ( 1) 00:18:45.898 1.981 - 1.996: 99.3111% ( 4) 00:18:45.898 1.996 - 2.011: 99.3172% ( 1) 00:18:45.898 2.011 - 2.027: 99.3233% ( 1) 00:18:45.898 2.042 - 2.057: 99.3294% ( 1) 00:18:45.898 2.057 - 2.072: 99.3415% ( 2) 00:18:45.898 2.072 - 2.088: 99.3537% ( 2) 00:18:45.898 2.103 - 2.118: 99.3659% ( 2) 00:18:45.898 2.118 - 2.133: 99.3781% ( 2) 00:18:45.898 2.133 - 2.149: 99.3842% ( 1) 00:18:45.898 2.164 - 2.179: 99.4025% ( 3) 00:18:45.898 2.225 - 2.240: 99.4086% ( 1) 00:18:45.898 2.758 - 2.773: 99.4147% ( 1) 00:18:45.898 3.962 - 3.992: 99.4208% ( 1) 00:18:45.899 3.992 - 4.023: 99.4269% ( 1) 00:18:45.899 4.023 - 4.053: 99.4330% ( 1) 00:18:45.899 4.053 - 4.084: 99.4391% ( 1) 00:18:45.899 4.084 - 4.114: 99.4452% ( 1) 00:18:45.899 4.206 - 4.236: 99.4513% ( 1) 00:18:45.899 4.328 - 4.358: 99.4635% ( 2) 00:18:45.899 4.450 - 4.480: 99.4696% ( 1) 00:18:45.899 4.846 - 4.876: 99.4757% ( 1) 00:18:45.899 4.968 - 4.998: 99.4818% ( 1) 00:18:45.899 5.090 - 5.120: 99.4879% ( 1) 00:18:45.899 5.150 - 5.181: 99.4940% ( 1) 00:18:45.899 5.181 - 5.211: 99.5001% ( 1) 00:18:45.899 5.242 - 5.272: 99.5062% ( 1) 00:18:45.899 5.272 - 5.303: 99.5123% ( 1) 00:18:45.899 5.608 - 5.638: 99.5184% ( 1) 00:18:45.899 5.790 - 5.821: 99.5244% ( 1) 00:18:45.899 5.851 - 5.882: 99.5305% ( 1) 00:18:45.899 7.924 
- 7.985: 99.5366% ( 1) 00:18:45.899 12.556 - 12.617: 99.5427% ( 1) 00:18:45.899 38.766 - 39.010: 99.5488% ( 1) 00:18:45.899 998.644 - 1006.446: 99.5549% ( 1) 00:18:45.899 2168.930 - 2184.533: 99.5610% ( 1) 00:18:45.899 2746.270 - 2761.874: 99.5671% ( 1) 00:18:45.899 3994.575 - 4025.783: 100.0000% ( 71) 00:18:45.899 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:45.899 [ 00:18:45.899 { 00:18:45.899 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:45.899 "subtype": "Discovery", 00:18:45.899 "listen_addresses": [], 00:18:45.899 "allow_any_host": true, 00:18:45.899 "hosts": [] 00:18:45.899 }, 00:18:45.899 { 00:18:45.899 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:45.899 "subtype": "NVMe", 00:18:45.899 "listen_addresses": [ 00:18:45.899 { 00:18:45.899 "trtype": "VFIOUSER", 00:18:45.899 "adrfam": "IPv4", 00:18:45.899 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:45.899 "trsvcid": "0" 00:18:45.899 } 00:18:45.899 ], 00:18:45.899 "allow_any_host": true, 00:18:45.899 "hosts": [], 00:18:45.899 "serial_number": "SPDK1", 00:18:45.899 "model_number": "SPDK bdev Controller", 00:18:45.899 "max_namespaces": 32, 00:18:45.899 "min_cntlid": 1, 00:18:45.899 "max_cntlid": 65519, 00:18:45.899 "namespaces": [ 00:18:45.899 { 00:18:45.899 "nsid": 
1, 00:18:45.899 "bdev_name": "Malloc1", 00:18:45.899 "name": "Malloc1", 00:18:45.899 "nguid": "2D420FDFFFFD4FD287F460C6F721CA8A", 00:18:45.899 "uuid": "2d420fdf-fffd-4fd2-87f4-60c6f721ca8a" 00:18:45.899 }, 00:18:45.899 { 00:18:45.899 "nsid": 2, 00:18:45.899 "bdev_name": "Malloc3", 00:18:45.899 "name": "Malloc3", 00:18:45.899 "nguid": "3F0F5FC59B1A40AB94387EE6148D0BE6", 00:18:45.899 "uuid": "3f0f5fc5-9b1a-40ab-9438-7ee6148d0be6" 00:18:45.899 } 00:18:45.899 ] 00:18:45.899 }, 00:18:45.899 { 00:18:45.899 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:45.899 "subtype": "NVMe", 00:18:45.899 "listen_addresses": [ 00:18:45.899 { 00:18:45.899 "trtype": "VFIOUSER", 00:18:45.899 "adrfam": "IPv4", 00:18:45.899 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:45.899 "trsvcid": "0" 00:18:45.899 } 00:18:45.899 ], 00:18:45.899 "allow_any_host": true, 00:18:45.899 "hosts": [], 00:18:45.899 "serial_number": "SPDK2", 00:18:45.899 "model_number": "SPDK bdev Controller", 00:18:45.899 "max_namespaces": 32, 00:18:45.899 "min_cntlid": 1, 00:18:45.899 "max_cntlid": 65519, 00:18:45.899 "namespaces": [ 00:18:45.899 { 00:18:45.899 "nsid": 1, 00:18:45.899 "bdev_name": "Malloc2", 00:18:45.899 "name": "Malloc2", 00:18:45.899 "nguid": "CBF6EDD8586C41D68B81B997A7E6B379", 00:18:45.899 "uuid": "cbf6edd8-586c-41d6-8b81-b997a7e6b379" 00:18:45.899 } 00:18:45.899 ] 00:18:45.899 } 00:18:45.899 ] 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=972655 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 
subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:45.899 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:46.158 [2024-12-14 16:32:16.042511] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:46.158 Malloc4 00:18:46.158 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:46.416 [2024-12-14 16:32:16.277342] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:46.416 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:46.416 Asynchronous Event Request test 00:18:46.416 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:46.416 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:46.416 Registering asynchronous event callbacks... 00:18:46.416 Starting namespace attribute notice tests for all controllers... 
00:18:46.416 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:46.416 aer_cb - Changed Namespace 00:18:46.416 Cleaning up... 00:18:46.416 [ 00:18:46.416 { 00:18:46.416 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:46.416 "subtype": "Discovery", 00:18:46.416 "listen_addresses": [], 00:18:46.416 "allow_any_host": true, 00:18:46.416 "hosts": [] 00:18:46.416 }, 00:18:46.416 { 00:18:46.416 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:46.416 "subtype": "NVMe", 00:18:46.416 "listen_addresses": [ 00:18:46.416 { 00:18:46.416 "trtype": "VFIOUSER", 00:18:46.416 "adrfam": "IPv4", 00:18:46.416 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:46.416 "trsvcid": "0" 00:18:46.416 } 00:18:46.416 ], 00:18:46.416 "allow_any_host": true, 00:18:46.416 "hosts": [], 00:18:46.416 "serial_number": "SPDK1", 00:18:46.416 "model_number": "SPDK bdev Controller", 00:18:46.416 "max_namespaces": 32, 00:18:46.416 "min_cntlid": 1, 00:18:46.416 "max_cntlid": 65519, 00:18:46.416 "namespaces": [ 00:18:46.416 { 00:18:46.416 "nsid": 1, 00:18:46.416 "bdev_name": "Malloc1", 00:18:46.416 "name": "Malloc1", 00:18:46.416 "nguid": "2D420FDFFFFD4FD287F460C6F721CA8A", 00:18:46.416 "uuid": "2d420fdf-fffd-4fd2-87f4-60c6f721ca8a" 00:18:46.416 }, 00:18:46.416 { 00:18:46.416 "nsid": 2, 00:18:46.416 "bdev_name": "Malloc3", 00:18:46.416 "name": "Malloc3", 00:18:46.417 "nguid": "3F0F5FC59B1A40AB94387EE6148D0BE6", 00:18:46.417 "uuid": "3f0f5fc5-9b1a-40ab-9438-7ee6148d0be6" 00:18:46.417 } 00:18:46.417 ] 00:18:46.417 }, 00:18:46.417 { 00:18:46.417 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:46.417 "subtype": "NVMe", 00:18:46.417 "listen_addresses": [ 00:18:46.417 { 00:18:46.417 "trtype": "VFIOUSER", 00:18:46.417 "adrfam": "IPv4", 00:18:46.417 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:46.417 "trsvcid": "0" 00:18:46.417 } 00:18:46.417 ], 00:18:46.417 "allow_any_host": true, 00:18:46.417 "hosts": [], 00:18:46.417 "serial_number": 
"SPDK2", 00:18:46.417 "model_number": "SPDK bdev Controller", 00:18:46.417 "max_namespaces": 32, 00:18:46.417 "min_cntlid": 1, 00:18:46.417 "max_cntlid": 65519, 00:18:46.417 "namespaces": [ 00:18:46.417 { 00:18:46.417 "nsid": 1, 00:18:46.417 "bdev_name": "Malloc2", 00:18:46.417 "name": "Malloc2", 00:18:46.417 "nguid": "CBF6EDD8586C41D68B81B997A7E6B379", 00:18:46.417 "uuid": "cbf6edd8-586c-41d6-8b81-b997a7e6b379" 00:18:46.417 }, 00:18:46.417 { 00:18:46.417 "nsid": 2, 00:18:46.417 "bdev_name": "Malloc4", 00:18:46.417 "name": "Malloc4", 00:18:46.417 "nguid": "67951C09D0674D23A195942DFE1268BC", 00:18:46.417 "uuid": "67951c09-d067-4d23-a195-942dfe1268bc" 00:18:46.417 } 00:18:46.417 ] 00:18:46.417 } 00:18:46.417 ] 00:18:46.417 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 972655 00:18:46.417 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:46.417 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 965144 00:18:46.417 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 965144 ']' 00:18:46.417 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 965144 00:18:46.675 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:46.675 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.675 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 965144 00:18:46.675 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.675 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.676 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 965144' 00:18:46.676 killing process with pid 965144 00:18:46.676 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 965144 00:18:46.676 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 965144 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=972783 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 972783' 00:18:46.934 Process pid: 972783 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 972783 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 972783 ']' 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.934 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:46.934 [2024-12-14 16:32:16.832051] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:46.934 [2024-12-14 16:32:16.832944] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:46.934 [2024-12-14 16:32:16.832981] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.934 [2024-12-14 16:32:16.907986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.934 [2024-12-14 16:32:16.930204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.934 [2024-12-14 16:32:16.930244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.934 [2024-12-14 16:32:16.930251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.934 [2024-12-14 16:32:16.930257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.934 [2024-12-14 16:32:16.930262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.934 [2024-12-14 16:32:16.931538] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.934 [2024-12-14 16:32:16.931604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.934 [2024-12-14 16:32:16.931621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.934 [2024-12-14 16:32:16.931623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.934 [2024-12-14 16:32:16.995144] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:46.935 [2024-12-14 16:32:16.995352] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:46.935 [2024-12-14 16:32:16.995567] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:46.935 [2024-12-14 16:32:16.995799] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:46.935 [2024-12-14 16:32:16.995972] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:18:47.193 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.193 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:47.193 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:48.129 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:48.389 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:48.389 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:48.389 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:48.389 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:48.389 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:48.389 Malloc1 00:18:48.647 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:48.647 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:48.906 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:49.164 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:49.164 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:49.164 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:49.423 Malloc2 00:18:49.423 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:49.682 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:49.682 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 972783 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 972783 ']' 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 972783 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.940 16:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972783 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972783' 00:18:49.940 killing process with pid 972783 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 972783 00:18:49.940 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 972783 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:50.198 00:18:50.198 real 0m50.757s 00:18:50.198 user 3m16.364s 00:18:50.198 sys 0m3.208s 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:50.198 ************************************ 00:18:50.198 END TEST nvmf_vfio_user 00:18:50.198 ************************************ 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.198 ************************************ 00:18:50.198 START TEST nvmf_vfio_user_nvme_compliance 00:18:50.198 ************************************ 00:18:50.198 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:50.458 * Looking for test storage... 00:18:50.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.458 16:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.458 16:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.458 --rc genhtml_branch_coverage=1 00:18:50.458 --rc genhtml_function_coverage=1 00:18:50.458 --rc genhtml_legend=1 00:18:50.458 --rc geninfo_all_blocks=1 00:18:50.458 --rc geninfo_unexecuted_blocks=1 00:18:50.458 00:18:50.458 ' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.458 --rc genhtml_branch_coverage=1 00:18:50.458 --rc genhtml_function_coverage=1 00:18:50.458 --rc genhtml_legend=1 00:18:50.458 --rc geninfo_all_blocks=1 00:18:50.458 --rc geninfo_unexecuted_blocks=1 00:18:50.458 00:18:50.458 ' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.458 --rc genhtml_branch_coverage=1 00:18:50.458 --rc genhtml_function_coverage=1 00:18:50.458 --rc 
genhtml_legend=1 00:18:50.458 --rc geninfo_all_blocks=1 00:18:50.458 --rc geninfo_unexecuted_blocks=1 00:18:50.458 00:18:50.458 ' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.458 --rc genhtml_branch_coverage=1 00:18:50.458 --rc genhtml_function_coverage=1 00:18:50.458 --rc genhtml_legend=1 00:18:50.458 --rc geninfo_all_blocks=1 00:18:50.458 --rc geninfo_unexecuted_blocks=1 00:18:50.458 00:18:50.458 ' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.458 16:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.458 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.459 16:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=973529 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 973529' 00:18:50.459 Process pid: 973529 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 973529 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 973529 ']' 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.459 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:50.459 [2024-12-14 16:32:20.474755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:50.459 [2024-12-14 16:32:20.474802] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.717 [2024-12-14 16:32:20.552491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:50.717 [2024-12-14 16:32:20.581502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.717 [2024-12-14 16:32:20.581546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.717 [2024-12-14 16:32:20.581563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.717 [2024-12-14 16:32:20.581573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.717 [2024-12-14 16:32:20.581580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.717 [2024-12-14 16:32:20.583174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.717 [2024-12-14 16:32:20.583214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.717 [2024-12-14 16:32:20.583214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.717 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.718 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:50.718 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:51.654 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.654 16:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.913 malloc0 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:51.913 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:51.913 00:18:51.913 00:18:51.913 CUnit - A unit testing framework for C - Version 2.1-3 00:18:51.913 http://cunit.sourceforge.net/ 00:18:51.913 00:18:51.913 00:18:51.913 Suite: nvme_compliance 00:18:51.913 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-14 16:32:21.932056] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:51.913 [2024-12-14 16:32:21.933397] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:51.913 [2024-12-14 16:32:21.933411] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:51.913 [2024-12-14 16:32:21.933417] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:51.913 [2024-12-14 16:32:21.936080] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:51.913 passed 00:18:52.172 Test: admin_identify_ctrlr_verify_fused ...[2024-12-14 16:32:22.014617] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.172 [2024-12-14 16:32:22.017635] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.172 passed 00:18:52.172 Test: admin_identify_ns ...[2024-12-14 16:32:22.096213] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.172 [2024-12-14 16:32:22.156570] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:52.172 [2024-12-14 16:32:22.164576] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:52.172 [2024-12-14 16:32:22.185649] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:52.172 passed 00:18:52.430 Test: admin_get_features_mandatory_features ...[2024-12-14 16:32:22.259369] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.430 [2024-12-14 16:32:22.264401] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.430 passed 00:18:52.430 Test: admin_get_features_optional_features ...[2024-12-14 16:32:22.338896] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.430 [2024-12-14 16:32:22.341909] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.430 passed 00:18:52.430 Test: admin_set_features_number_of_queues ...[2024-12-14 16:32:22.418676] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.689 [2024-12-14 16:32:22.521650] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.689 passed 00:18:52.689 Test: admin_get_log_page_mandatory_logs ...[2024-12-14 16:32:22.598675] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.689 [2024-12-14 16:32:22.601706] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.689 passed 00:18:52.689 Test: admin_get_log_page_with_lpo ...[2024-12-14 16:32:22.677726] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.689 [2024-12-14 16:32:22.749569] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:52.689 [2024-12-14 16:32:22.762622] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.948 passed 00:18:52.948 Test: fabric_property_get ...[2024-12-14 16:32:22.836354] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.948 [2024-12-14 16:32:22.837594] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:52.948 [2024-12-14 16:32:22.839375] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.948 passed 00:18:52.948 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-14 16:32:22.916874] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:52.948 [2024-12-14 16:32:22.918101] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:52.948 [2024-12-14 16:32:22.919899] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:52.948 passed 00:18:52.948 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-14 16:32:22.996536] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.207 [2024-12-14 16:32:23.077569] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:53.207 [2024-12-14 16:32:23.093573] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:53.207 [2024-12-14 16:32:23.098639] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.207 passed 00:18:53.207 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-14 16:32:23.175332] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.207 [2024-12-14 16:32:23.176552] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:53.207 [2024-12-14 16:32:23.178347] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.207 passed 00:18:53.207 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-14 16:32:23.254760] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.465 [2024-12-14 16:32:23.334563] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:53.465 [2024-12-14 
16:32:23.358566] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:53.465 [2024-12-14 16:32:23.362795] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.465 passed 00:18:53.465 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-14 16:32:23.436451] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.465 [2024-12-14 16:32:23.437685] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:53.465 [2024-12-14 16:32:23.437710] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:53.465 [2024-12-14 16:32:23.441484] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.465 passed 00:18:53.465 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-14 16:32:23.516720] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.724 [2024-12-14 16:32:23.612569] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:53.724 [2024-12-14 16:32:23.620566] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:53.724 [2024-12-14 16:32:23.628563] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:53.724 [2024-12-14 16:32:23.636567] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:53.724 [2024-12-14 16:32:23.665639] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.724 passed 00:18:53.724 Test: admin_create_io_sq_verify_pc ...[2024-12-14 16:32:23.741270] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:53.724 [2024-12-14 16:32:23.764570] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:53.724 [2024-12-14 16:32:23.782488] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:53.983 passed 00:18:53.983 Test: admin_create_io_qp_max_qps ...[2024-12-14 16:32:23.856993] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:54.919 [2024-12-14 16:32:24.965567] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:55.488 [2024-12-14 16:32:25.340185] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.488 passed 00:18:55.488 Test: admin_create_io_sq_shared_cq ...[2024-12-14 16:32:25.416405] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.488 [2024-12-14 16:32:25.548571] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:55.746 [2024-12-14 16:32:25.585623] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.746 passed 00:18:55.746 00:18:55.746 Run Summary: Type Total Ran Passed Failed Inactive 00:18:55.746 suites 1 1 n/a 0 0 00:18:55.746 tests 18 18 18 0 0 00:18:55.746 asserts 360 360 360 0 n/a 00:18:55.746 00:18:55.746 Elapsed time = 1.499 seconds 00:18:55.746 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 973529 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 973529 ']' 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 973529 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973529 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973529' 00:18:55.747 killing process with pid 973529 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 973529 00:18:55.747 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 973529 00:18:56.005 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:56.005 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:56.005 00:18:56.005 real 0m5.647s 00:18:56.005 user 0m15.781s 00:18:56.005 sys 0m0.521s 00:18:56.005 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.005 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:56.005 ************************************ 00:18:56.005 END TEST nvmf_vfio_user_nvme_compliance 00:18:56.005 ************************************ 00:18:56.005 16:32:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:56.005 16:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:56.005 16:32:25 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.005 16:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.005 ************************************ 00:18:56.005 START TEST nvmf_vfio_user_fuzz 00:18:56.005 ************************************ 00:18:56.006 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:56.006 * Looking for test storage... 00:18:56.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.006 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:56.006 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:56.006 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:56.265 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.266 16:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:56.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.266 --rc genhtml_branch_coverage=1 00:18:56.266 --rc genhtml_function_coverage=1 00:18:56.266 --rc genhtml_legend=1 00:18:56.266 --rc geninfo_all_blocks=1 00:18:56.266 --rc geninfo_unexecuted_blocks=1 00:18:56.266 00:18:56.266 ' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:56.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.266 --rc genhtml_branch_coverage=1 00:18:56.266 --rc genhtml_function_coverage=1 00:18:56.266 --rc genhtml_legend=1 00:18:56.266 --rc geninfo_all_blocks=1 00:18:56.266 --rc geninfo_unexecuted_blocks=1 00:18:56.266 00:18:56.266 ' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:56.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.266 --rc genhtml_branch_coverage=1 00:18:56.266 --rc genhtml_function_coverage=1 00:18:56.266 --rc genhtml_legend=1 00:18:56.266 --rc geninfo_all_blocks=1 00:18:56.266 --rc geninfo_unexecuted_blocks=1 00:18:56.266 00:18:56.266 ' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:56.266 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:56.266 --rc genhtml_branch_coverage=1 00:18:56.266 --rc genhtml_function_coverage=1 00:18:56.266 --rc genhtml_legend=1 00:18:56.266 --rc geninfo_all_blocks=1 00:18:56.266 --rc geninfo_unexecuted_blocks=1 00:18:56.266 00:18:56.266 ' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.266 16:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=974491 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 974491' 00:18:56.266 Process pid: 974491 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 974491 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 974491 ']' 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.266 16:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.266 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:56.525 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.525 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:56.525 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.461 malloc0 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:57.461 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:29.553 Fuzzing completed. Shutting down the fuzz application 00:19:29.553 00:19:29.553 Dumping successful admin opcodes: 00:19:29.553 9, 10, 00:19:29.553 Dumping successful io opcodes: 00:19:29.553 0, 00:19:29.553 NS: 0x20000081ef00 I/O qp, Total commands completed: 1022237, total successful commands: 4020, random_seed: 1219319680 00:19:29.554 NS: 0x20000081ef00 admin qp, Total commands completed: 252016, total successful commands: 59, random_seed: 1967838400 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 974491 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 974491 ']' 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 974491 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974491 00:19:29.554 16:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974491' 00:19:29.554 killing process with pid 974491 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 974491 00:19:29.554 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 974491 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:29.554 00:19:29.554 real 0m32.176s 00:19:29.554 user 0m29.887s 00:19:29.554 sys 0m30.871s 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:29.554 ************************************ 00:19:29.554 END TEST nvmf_vfio_user_fuzz 00:19:29.554 ************************************ 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:29.554 ************************************ 00:19:29.554 START TEST nvmf_auth_target 00:19:29.554 ************************************ 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:29.554 * Looking for test storage... 00:19:29.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.554 16:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.554 16:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:29.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.554 --rc genhtml_branch_coverage=1 00:19:29.554 --rc genhtml_function_coverage=1 00:19:29.554 --rc genhtml_legend=1 00:19:29.554 --rc geninfo_all_blocks=1 00:19:29.554 --rc geninfo_unexecuted_blocks=1 00:19:29.554 00:19:29.554 ' 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:29.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.554 --rc genhtml_branch_coverage=1 00:19:29.554 --rc genhtml_function_coverage=1 00:19:29.554 --rc genhtml_legend=1 00:19:29.554 --rc geninfo_all_blocks=1 00:19:29.554 --rc geninfo_unexecuted_blocks=1 00:19:29.554 00:19:29.554 ' 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:29.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.554 --rc genhtml_branch_coverage=1 00:19:29.554 --rc genhtml_function_coverage=1 00:19:29.554 --rc genhtml_legend=1 00:19:29.554 --rc geninfo_all_blocks=1 00:19:29.554 --rc geninfo_unexecuted_blocks=1 00:19:29.554 00:19:29.554 ' 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:29.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.554 --rc genhtml_branch_coverage=1 00:19:29.554 --rc genhtml_function_coverage=1 00:19:29.554 --rc genhtml_legend=1 00:19:29.554 
--rc geninfo_all_blocks=1 00:19:29.554 --rc geninfo_unexecuted_blocks=1 00:19:29.554 00:19:29.554 ' 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.554 
16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.554 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:29.555 16:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:29.555 16:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:29.555 16:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.892 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.892 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:34.892 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:34.892 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:34.892 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:34.892 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:34.892 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:34.892 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:34.892 16:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:34.892 16:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:34.892 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:34.892 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.892 
16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:34.892 Found net devices under 0000:af:00.0: cvl_0_0 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.892 
16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:34.892 Found net devices under 0000:af:00.1: cvl_0_1 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:34.892 16:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:34.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:19:34.892 00:19:34.892 --- 10.0.0.2 ping statistics --- 00:19:34.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.892 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:34.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:19:34.892 00:19:34.892 --- 10.0.0.1 ping statistics --- 00:19:34.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.892 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:34.892 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=982732 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 982732 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982732 ']' 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=982912 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eee7dcb1df074d5a704b198c049b42105ed02b374197fda8 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8WQ 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eee7dcb1df074d5a704b198c049b42105ed02b374197fda8 0 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eee7dcb1df074d5a704b198c049b42105ed02b374197fda8 0 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eee7dcb1df074d5a704b198c049b42105ed02b374197fda8 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8WQ 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8WQ 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.8WQ 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fa51ba319acab3195e2a1e51ee9e0d69f184c240b65f2e66d0958cffef00c627 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hlf 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fa51ba319acab3195e2a1e51ee9e0d69f184c240b65f2e66d0958cffef00c627 3 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fa51ba319acab3195e2a1e51ee9e0d69f184c240b65f2e66d0958cffef00c627 3 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fa51ba319acab3195e2a1e51ee9e0d69f184c240b65f2e66d0958cffef00c627 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hlf 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hlf 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.hlf 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1dab1faf2bfe9329c2c6ba7b4583daf2 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Sbg 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1dab1faf2bfe9329c2c6ba7b4583daf2 1 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
1dab1faf2bfe9329c2c6ba7b4583daf2 1 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1dab1faf2bfe9329c2c6ba7b4583daf2 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Sbg 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Sbg 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Sbg 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b8e4459a8b9a1283624b00dba907836e73a3dc1f7c82291f 00:19:34.893 16:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vpg 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b8e4459a8b9a1283624b00dba907836e73a3dc1f7c82291f 2 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b8e4459a8b9a1283624b00dba907836e73a3dc1f7c82291f 2 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b8e4459a8b9a1283624b00dba907836e73a3dc1f7c82291f 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vpg 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vpg 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.vpg 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b7efe5d435b1eccda354aa069bc2ecc70e50b0ae6df7e3c9 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5uO 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b7efe5d435b1eccda354aa069bc2ecc70e50b0ae6df7e3c9 2 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b7efe5d435b1eccda354aa069bc2ecc70e50b0ae6df7e3c9 2 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b7efe5d435b1eccda354aa069bc2ecc70e50b0ae6df7e3c9 00:19:34.893 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5uO 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5uO 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.5uO 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=76149ad38df75fa8c0a392d103814bac 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.o98 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 76149ad38df75fa8c0a392d103814bac 1 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 76149ad38df75fa8c0a392d103814bac 1 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=76149ad38df75fa8c0a392d103814bac 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.o98 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.o98 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.o98 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=786e6f36ff9a7180d414e71b210478c02a871a5b94ecbf27610b148a8d03b96e 00:19:34.894 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.d77 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 786e6f36ff9a7180d414e71b210478c02a871a5b94ecbf27610b148a8d03b96e 3 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 786e6f36ff9a7180d414e71b210478c02a871a5b94ecbf27610b148a8d03b96e 3 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=786e6f36ff9a7180d414e71b210478c02a871a5b94ecbf27610b148a8d03b96e 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.d77 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.d77 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.d77 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 982732 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982732 ']' 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.170 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.170 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.170 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:35.170 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 982912 /var/tmp/host.sock 00:19:35.170 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982912 ']' 00:19:35.170 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:35.170 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.171 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:35.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:35.171 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.171 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8WQ 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8WQ 00:19:35.475 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8WQ 00:19:35.734 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.hlf ]] 00:19:35.734 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hlf 00:19:35.734 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.734 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.734 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.734 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hlf 00:19:35.734 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hlf 00:19:35.993 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:35.993 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Sbg 00:19:35.993 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.993 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.993 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.993 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Sbg 00:19:35.993 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Sbg 00:19:35.993 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.vpg ]] 00:19:35.993 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vpg 00:19:35.993 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.993 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.993 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.993 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vpg 00:19:35.993 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vpg 00:19:36.252 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:36.252 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5uO 00:19:36.252 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.252 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.252 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.252 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5uO 00:19:36.252 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5uO 00:19:36.511 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.o98 ]] 00:19:36.511 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o98 00:19:36.511 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.511 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.511 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.511 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o98 00:19:36.511 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o98 00:19:36.769 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.d77 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.d77 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.d77 00:19:36.770 16:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:36.770 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.029 16:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.029 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.288 00:19:37.288 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.288 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.288 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.547 { 00:19:37.547 "cntlid": 1, 00:19:37.547 "qid": 0, 00:19:37.547 "state": "enabled", 00:19:37.547 "thread": "nvmf_tgt_poll_group_000", 00:19:37.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:37.547 "listen_address": { 00:19:37.547 "trtype": "TCP", 00:19:37.547 "adrfam": "IPv4", 00:19:37.547 "traddr": "10.0.0.2", 00:19:37.547 "trsvcid": "4420" 00:19:37.547 }, 00:19:37.547 "peer_address": { 00:19:37.547 "trtype": "TCP", 00:19:37.547 "adrfam": "IPv4", 00:19:37.547 "traddr": "10.0.0.1", 00:19:37.547 "trsvcid": "47226" 00:19:37.547 }, 00:19:37.547 "auth": { 00:19:37.547 "state": "completed", 00:19:37.547 "digest": "sha256", 00:19:37.547 "dhgroup": "null" 00:19:37.547 } 00:19:37.547 } 00:19:37.547 ]' 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.547 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.805 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:19:37.805 16:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:19:38.373 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.373 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:38.373 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.373 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.373 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.373 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.373 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:38.373 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.631 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:38.631 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.632 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.890 00:19:38.890 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.890 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.890 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.149 { 00:19:39.149 "cntlid": 3, 00:19:39.149 "qid": 0, 00:19:39.149 "state": "enabled", 00:19:39.149 "thread": "nvmf_tgt_poll_group_000", 00:19:39.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:39.149 "listen_address": { 00:19:39.149 "trtype": "TCP", 00:19:39.149 "adrfam": "IPv4", 00:19:39.149 
"traddr": "10.0.0.2", 00:19:39.149 "trsvcid": "4420" 00:19:39.149 }, 00:19:39.149 "peer_address": { 00:19:39.149 "trtype": "TCP", 00:19:39.149 "adrfam": "IPv4", 00:19:39.149 "traddr": "10.0.0.1", 00:19:39.149 "trsvcid": "47250" 00:19:39.149 }, 00:19:39.149 "auth": { 00:19:39.149 "state": "completed", 00:19:39.149 "digest": "sha256", 00:19:39.149 "dhgroup": "null" 00:19:39.149 } 00:19:39.149 } 00:19:39.149 ]' 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.149 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.408 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:19:39.408 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:19:39.975 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.975 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:39.975 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.975 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.975 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.976 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.976 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.976 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.234 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.493 00:19:40.493 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.493 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.493 
16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.752 { 00:19:40.752 "cntlid": 5, 00:19:40.752 "qid": 0, 00:19:40.752 "state": "enabled", 00:19:40.752 "thread": "nvmf_tgt_poll_group_000", 00:19:40.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:40.752 "listen_address": { 00:19:40.752 "trtype": "TCP", 00:19:40.752 "adrfam": "IPv4", 00:19:40.752 "traddr": "10.0.0.2", 00:19:40.752 "trsvcid": "4420" 00:19:40.752 }, 00:19:40.752 "peer_address": { 00:19:40.752 "trtype": "TCP", 00:19:40.752 "adrfam": "IPv4", 00:19:40.752 "traddr": "10.0.0.1", 00:19:40.752 "trsvcid": "47270" 00:19:40.752 }, 00:19:40.752 "auth": { 00:19:40.752 "state": "completed", 00:19:40.752 "digest": "sha256", 00:19:40.752 "dhgroup": "null" 00:19:40.752 } 00:19:40.752 } 00:19:40.752 ]' 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.752 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.010 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:19:41.010 16:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:19:41.578 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.578 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:41.578 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.578 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.578 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.578 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.578 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.578 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.837 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.095 00:19:42.095 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.095 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.095 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.095 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.095 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.095 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.095 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.095 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.095 
16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.095 { 00:19:42.095 "cntlid": 7, 00:19:42.095 "qid": 0, 00:19:42.095 "state": "enabled", 00:19:42.095 "thread": "nvmf_tgt_poll_group_000", 00:19:42.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:42.095 "listen_address": { 00:19:42.095 "trtype": "TCP", 00:19:42.095 "adrfam": "IPv4", 00:19:42.095 "traddr": "10.0.0.2", 00:19:42.095 "trsvcid": "4420" 00:19:42.095 }, 00:19:42.095 "peer_address": { 00:19:42.095 "trtype": "TCP", 00:19:42.095 "adrfam": "IPv4", 00:19:42.095 "traddr": "10.0.0.1", 00:19:42.095 "trsvcid": "47306" 00:19:42.095 }, 00:19:42.095 "auth": { 00:19:42.095 "state": "completed", 00:19:42.095 "digest": "sha256", 00:19:42.095 "dhgroup": "null" 00:19:42.095 } 00:19:42.095 } 00:19:42.095 ]' 00:19:42.354 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.354 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.354 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.354 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:42.354 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.354 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.354 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.354 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.613 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:19:42.613 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:19:43.180 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.181 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.439 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.439 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.439 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.439 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.439 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.698 { 00:19:43.698 "cntlid": 9, 00:19:43.698 "qid": 0, 00:19:43.698 "state": "enabled", 00:19:43.698 "thread": "nvmf_tgt_poll_group_000", 00:19:43.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:43.698 "listen_address": { 00:19:43.698 "trtype": "TCP", 00:19:43.698 "adrfam": "IPv4", 00:19:43.698 "traddr": "10.0.0.2", 00:19:43.698 "trsvcid": "4420" 00:19:43.698 }, 00:19:43.698 "peer_address": { 00:19:43.698 "trtype": "TCP", 00:19:43.698 "adrfam": "IPv4", 00:19:43.698 "traddr": "10.0.0.1", 00:19:43.698 "trsvcid": "37334" 00:19:43.698 
}, 00:19:43.698 "auth": { 00:19:43.698 "state": "completed", 00:19:43.698 "digest": "sha256", 00:19:43.698 "dhgroup": "ffdhe2048" 00:19:43.698 } 00:19:43.698 } 00:19:43.698 ]' 00:19:43.698 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.957 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.957 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.957 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.957 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.957 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.957 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.957 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.216 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:19:44.216 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret 
DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.783 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.042 00:19:45.042 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.042 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.042 16:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.300 { 00:19:45.300 "cntlid": 11, 00:19:45.300 "qid": 0, 00:19:45.300 "state": "enabled", 00:19:45.300 "thread": "nvmf_tgt_poll_group_000", 00:19:45.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:45.300 "listen_address": { 00:19:45.300 "trtype": "TCP", 00:19:45.300 "adrfam": "IPv4", 00:19:45.300 "traddr": "10.0.0.2", 00:19:45.300 "trsvcid": "4420" 00:19:45.300 }, 00:19:45.300 "peer_address": { 00:19:45.300 "trtype": "TCP", 00:19:45.300 "adrfam": "IPv4", 00:19:45.300 "traddr": "10.0.0.1", 00:19:45.300 "trsvcid": "37376" 00:19:45.300 }, 00:19:45.300 "auth": { 00:19:45.300 "state": "completed", 00:19:45.300 "digest": "sha256", 00:19:45.300 "dhgroup": "ffdhe2048" 00:19:45.300 } 00:19:45.300 } 00:19:45.300 ]' 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.300 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.558 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.559 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.559 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.559 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:19:45.559 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:19:46.125 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.126 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.126 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.126 16:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.384 16:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.384 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.642 00:19:46.642 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.642 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.642 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.900 { 00:19:46.900 "cntlid": 13, 00:19:46.900 "qid": 0, 00:19:46.900 "state": "enabled", 00:19:46.900 "thread": "nvmf_tgt_poll_group_000", 00:19:46.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:46.900 "listen_address": { 00:19:46.900 "trtype": "TCP", 00:19:46.900 "adrfam": "IPv4", 00:19:46.900 "traddr": "10.0.0.2", 00:19:46.900 "trsvcid": "4420" 00:19:46.900 }, 00:19:46.900 "peer_address": { 00:19:46.900 "trtype": "TCP", 00:19:46.900 "adrfam": "IPv4", 00:19:46.900 "traddr": "10.0.0.1", 00:19:46.900 "trsvcid": "37406" 00:19:46.900 }, 00:19:46.900 "auth": { 00:19:46.900 "state": "completed", 00:19:46.900 "digest": "sha256", 00:19:46.900 "dhgroup": "ffdhe2048" 00:19:46.900 } 00:19:46.900 } 00:19:46.900 ]' 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.900 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.159 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.159 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.159 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:19:47.159 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:19:47.159 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:19:47.726 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.726 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:47.726 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.726 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.726 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.726 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.726 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.726 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.985 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:47.985 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.985 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.985 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.986 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.244 00:19:48.244 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.244 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.244 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.503 { 00:19:48.503 "cntlid": 15, 00:19:48.503 "qid": 0, 00:19:48.503 "state": "enabled", 00:19:48.503 "thread": "nvmf_tgt_poll_group_000", 00:19:48.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:48.503 "listen_address": { 00:19:48.503 "trtype": "TCP", 00:19:48.503 "adrfam": "IPv4", 00:19:48.503 "traddr": "10.0.0.2", 00:19:48.503 "trsvcid": "4420" 00:19:48.503 }, 00:19:48.503 "peer_address": { 00:19:48.503 "trtype": "TCP", 00:19:48.503 "adrfam": "IPv4", 00:19:48.503 "traddr": "10.0.0.1", 00:19:48.503 "trsvcid": "37434" 00:19:48.503 }, 00:19:48.503 "auth": { 00:19:48.503 
"state": "completed", 00:19:48.503 "digest": "sha256", 00:19:48.503 "dhgroup": "ffdhe2048" 00:19:48.503 } 00:19:48.503 } 00:19:48.503 ]' 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.503 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.761 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:19:48.761 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:19:49.328 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.328 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.328 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:49.328 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.329 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.329 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.329 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.329 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.329 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.329 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.587 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.845 00:19:49.845 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.845 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.845 16:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.104 
16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.104 { 00:19:50.104 "cntlid": 17, 00:19:50.104 "qid": 0, 00:19:50.104 "state": "enabled", 00:19:50.104 "thread": "nvmf_tgt_poll_group_000", 00:19:50.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.104 "listen_address": { 00:19:50.104 "trtype": "TCP", 00:19:50.104 "adrfam": "IPv4", 00:19:50.104 "traddr": "10.0.0.2", 00:19:50.104 "trsvcid": "4420" 00:19:50.104 }, 00:19:50.104 "peer_address": { 00:19:50.104 "trtype": "TCP", 00:19:50.104 "adrfam": "IPv4", 00:19:50.104 "traddr": "10.0.0.1", 00:19:50.104 "trsvcid": "37464" 00:19:50.104 }, 00:19:50.104 "auth": { 00:19:50.104 "state": "completed", 00:19:50.104 "digest": "sha256", 00:19:50.104 "dhgroup": "ffdhe3072" 00:19:50.104 } 00:19:50.104 } 00:19:50.104 ]' 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.104 16:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.104 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.362 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:19:50.363 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:19:50.929 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.929 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:50.929 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.929 16:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.929 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.929 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.929 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.929 16:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.188 16:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.188 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.447 00:19:51.447 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.447 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.447 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.705 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.705 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.706 { 00:19:51.706 "cntlid": 19, 00:19:51.706 "qid": 0, 00:19:51.706 "state": "enabled", 00:19:51.706 "thread": "nvmf_tgt_poll_group_000", 00:19:51.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:51.706 "listen_address": { 00:19:51.706 "trtype": "TCP", 00:19:51.706 "adrfam": "IPv4", 00:19:51.706 "traddr": "10.0.0.2", 00:19:51.706 "trsvcid": "4420" 00:19:51.706 }, 00:19:51.706 "peer_address": { 00:19:51.706 "trtype": "TCP", 00:19:51.706 "adrfam": "IPv4", 00:19:51.706 "traddr": "10.0.0.1", 00:19:51.706 "trsvcid": "37490" 00:19:51.706 }, 00:19:51.706 "auth": { 00:19:51.706 "state": "completed", 00:19:51.706 "digest": "sha256", 00:19:51.706 "dhgroup": "ffdhe3072" 00:19:51.706 } 00:19:51.706 } 00:19:51.706 ]' 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.706 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
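The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion that recurs in this log is why the key3 runs attach with `--dhchap-key key3` only, while key0–key2 also pass `--dhchap-ctrlr-key`: bash's `:+` expands to the flag pair only when a controller key is defined for that key id, and to nothing otherwise. A Python sketch of that argument-building logic (the `ckeys` mapping below is hypothetical, chosen to mirror what the log shows):

```python
def build_attach_args(keyid: int, ckeys: dict) -> list:
    """Mimic auth.sh's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}):
    emit the controller-key flag pair only when a ckey exists for keyid."""
    args = ["--dhchap-key", f"key{keyid}"]
    if ckeys.get(keyid):  # empty/unset ckey -> the expansion adds nothing
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

# In this log, keys 0-2 have controller keys and key3 does not
# (assumed mapping, inferred from the attach_controller invocations).
ckeys = {0: "ckey0", 1: "ckey1", 2: "ckey2"}
print(build_attach_args(1, ckeys))  # includes --dhchap-ctrlr-key ckey1
print(build_attach_args(3, ckeys))  # key3: unidirectional auth, no ctrlr key
```

With a controller key present, authentication is bidirectional (host verifies the controller too); the key3 runs exercise the unidirectional path.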
00:19:51.965 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:19:51.965 16:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:19:52.531 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.531 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:52.531 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.531 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.531 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.531 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.531 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.531 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.789 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.047 00:19:53.047 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.047 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.047 16:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.305 { 00:19:53.305 "cntlid": 21, 00:19:53.305 "qid": 0, 00:19:53.305 "state": "enabled", 00:19:53.305 "thread": "nvmf_tgt_poll_group_000", 00:19:53.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.305 "listen_address": { 00:19:53.305 "trtype": "TCP", 00:19:53.305 "adrfam": "IPv4", 00:19:53.305 "traddr": "10.0.0.2", 00:19:53.305 "trsvcid": "4420" 00:19:53.305 }, 00:19:53.305 "peer_address": { 00:19:53.305 "trtype": "TCP", 00:19:53.305 "adrfam": "IPv4", 
00:19:53.305 "traddr": "10.0.0.1", 00:19:53.305 "trsvcid": "57762" 00:19:53.305 }, 00:19:53.305 "auth": { 00:19:53.305 "state": "completed", 00:19:53.305 "digest": "sha256", 00:19:53.305 "dhgroup": "ffdhe3072" 00:19:53.305 } 00:19:53.305 } 00:19:53.305 ]' 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.305 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.564 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:19:53.564 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:19:54.130 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.130 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:54.130 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.130 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.130 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.130 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.130 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.130 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:54.389 16:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.647 00:19:54.647 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.647 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.648 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.906 { 00:19:54.906 "cntlid": 23, 00:19:54.906 "qid": 0, 00:19:54.906 "state": "enabled", 00:19:54.906 "thread": "nvmf_tgt_poll_group_000", 00:19:54.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:54.906 "listen_address": { 00:19:54.906 "trtype": "TCP", 00:19:54.906 "adrfam": "IPv4", 00:19:54.906 "traddr": "10.0.0.2", 00:19:54.906 "trsvcid": "4420" 00:19:54.906 }, 00:19:54.906 "peer_address": { 00:19:54.906 "trtype": "TCP", 00:19:54.906 "adrfam": "IPv4", 00:19:54.906 "traddr": "10.0.0.1", 00:19:54.906 "trsvcid": "57788" 00:19:54.906 }, 00:19:54.906 "auth": { 00:19:54.906 "state": "completed", 00:19:54.906 "digest": "sha256", 00:19:54.906 "dhgroup": "ffdhe3072" 00:19:54.906 } 00:19:54.906 } 00:19:54.906 ]' 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.906 16:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.906 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.164 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:19:55.164 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
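The `--dhchap-secret DHHC-1:03:...=:` strings passed to `nvme connect` throughout this log follow the NVMe-oF in-band authentication secret representation: a `DHHC-1` prefix, a two-digit hash identifier, and a base64 blob that (per the spec's encoding, which is an assumption here rather than something the log states) carries the raw key followed by a 4-byte little-endian CRC-32. A sketch that splits one of this log's key3 secrets into those fields:

```python
import base64
import struct
import zlib

def parse_dhchap_secret(secret: str):
    """Split a DHHC-1 secret: 'DHHC-1:<hash>:<base64(key || CRC-32 LE)>:'.
    The trailing-CRC layout is assumed from the NVMe-oF secret format,
    not verified against the log."""
    prefix, hash_id, b64, _ = secret.split(":")
    assert prefix == "DHHC-1"
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], struct.unpack("<I", raw[-4:])[0]
    return hash_id, key, crc == (zlib.crc32(key) & 0xFFFFFFFF)

# Secret reused across the key3 runs in this log.
secret = ("DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3"
          "MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=:")
hash_id, key, crc_ok = parse_dhchap_secret(secret)
print(hash_id, len(key), crc_ok)
```

Decoding confirms why this is a `:03:` secret: the payload is a 64-byte key, the length class associated with that hash identifier.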
00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.731 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.990 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.248 00:19:56.248 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.248 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.248 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.507 16:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.507 { 00:19:56.507 "cntlid": 25, 00:19:56.507 "qid": 0, 00:19:56.507 "state": "enabled", 00:19:56.507 "thread": "nvmf_tgt_poll_group_000", 00:19:56.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:56.507 "listen_address": { 00:19:56.507 "trtype": "TCP", 00:19:56.507 "adrfam": "IPv4", 00:19:56.507 "traddr": "10.0.0.2", 00:19:56.507 "trsvcid": "4420" 00:19:56.507 }, 00:19:56.507 "peer_address": { 00:19:56.507 "trtype": "TCP", 00:19:56.507 "adrfam": "IPv4", 00:19:56.507 "traddr": "10.0.0.1", 00:19:56.507 "trsvcid": "57824" 00:19:56.507 }, 00:19:56.507 "auth": { 00:19:56.507 "state": "completed", 00:19:56.507 "digest": "sha256", 00:19:56.507 "dhgroup": "ffdhe4096" 00:19:56.507 } 00:19:56.507 } 00:19:56.507 ]' 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.507 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.765 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:19:56.765 16:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:19:57.332 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.332 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:57.332 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.332 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.332 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.332 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.332 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.332 16:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.591 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.850 00:19:57.850 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.850 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.850 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.108 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.108 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.108 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.108 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.108 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.108 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.108 { 00:19:58.108 "cntlid": 27, 00:19:58.108 "qid": 0, 00:19:58.108 "state": "enabled", 00:19:58.108 "thread": "nvmf_tgt_poll_group_000", 00:19:58.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:58.108 "listen_address": { 00:19:58.108 "trtype": "TCP", 00:19:58.108 "adrfam": "IPv4", 00:19:58.108 "traddr": "10.0.0.2", 00:19:58.108 
"trsvcid": "4420" 00:19:58.108 }, 00:19:58.108 "peer_address": { 00:19:58.108 "trtype": "TCP", 00:19:58.108 "adrfam": "IPv4", 00:19:58.108 "traddr": "10.0.0.1", 00:19:58.109 "trsvcid": "57854" 00:19:58.109 }, 00:19:58.109 "auth": { 00:19:58.109 "state": "completed", 00:19:58.109 "digest": "sha256", 00:19:58.109 "dhgroup": "ffdhe4096" 00:19:58.109 } 00:19:58.109 } 00:19:58.109 ]' 00:19:58.109 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.109 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.109 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.109 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.109 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.109 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.109 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.109 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.367 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:19:58.367 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:19:58.934 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.934 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.934 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.934 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.934 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.934 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.934 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.934 16:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.192 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.451 00:19:59.451 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.451 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:59.451 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.709 { 00:19:59.709 "cntlid": 29, 00:19:59.709 "qid": 0, 00:19:59.709 "state": "enabled", 00:19:59.709 "thread": "nvmf_tgt_poll_group_000", 00:19:59.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.709 "listen_address": { 00:19:59.709 "trtype": "TCP", 00:19:59.709 "adrfam": "IPv4", 00:19:59.709 "traddr": "10.0.0.2", 00:19:59.709 "trsvcid": "4420" 00:19:59.709 }, 00:19:59.709 "peer_address": { 00:19:59.709 "trtype": "TCP", 00:19:59.709 "adrfam": "IPv4", 00:19:59.709 "traddr": "10.0.0.1", 00:19:59.709 "trsvcid": "57870" 00:19:59.709 }, 00:19:59.709 "auth": { 00:19:59.709 "state": "completed", 00:19:59.709 "digest": "sha256", 00:19:59.709 "dhgroup": "ffdhe4096" 00:19:59.709 } 00:19:59.709 } 00:19:59.709 ]' 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.709 16:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.709 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.967 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:19:59.967 16:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:00.534 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.534 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:00.534 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.534 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.534 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.534 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.534 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.534 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.793 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:01.051 00:20:01.051 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.051 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.051 16:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.310 { 00:20:01.310 "cntlid": 31, 00:20:01.310 "qid": 0, 00:20:01.310 "state": "enabled", 00:20:01.310 "thread": "nvmf_tgt_poll_group_000", 00:20:01.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:01.310 "listen_address": { 00:20:01.310 "trtype": "TCP", 00:20:01.310 "adrfam": "IPv4", 00:20:01.310 "traddr": "10.0.0.2", 00:20:01.310 "trsvcid": "4420" 00:20:01.310 }, 00:20:01.310 "peer_address": { 00:20:01.310 "trtype": "TCP", 00:20:01.310 "adrfam": "IPv4", 00:20:01.310 "traddr": "10.0.0.1", 00:20:01.310 "trsvcid": "57904" 00:20:01.310 }, 00:20:01.310 "auth": { 00:20:01.310 "state": "completed", 00:20:01.310 "digest": "sha256", 00:20:01.310 "dhgroup": "ffdhe4096" 00:20:01.310 } 00:20:01.310 } 00:20:01.310 ]' 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.310 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.568 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:01.568 16:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:02.134 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.134 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.134 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.134 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.134 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.134 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.134 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.134 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.134 16:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.393 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.652 00:20:02.652 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.652 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.652 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.910 { 00:20:02.910 "cntlid": 33, 00:20:02.910 "qid": 0, 00:20:02.910 "state": "enabled", 00:20:02.910 "thread": "nvmf_tgt_poll_group_000", 00:20:02.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:02.910 "listen_address": { 00:20:02.910 "trtype": "TCP", 00:20:02.910 "adrfam": "IPv4", 00:20:02.910 "traddr": "10.0.0.2", 00:20:02.910 
"trsvcid": "4420" 00:20:02.910 }, 00:20:02.910 "peer_address": { 00:20:02.910 "trtype": "TCP", 00:20:02.910 "adrfam": "IPv4", 00:20:02.910 "traddr": "10.0.0.1", 00:20:02.910 "trsvcid": "35236" 00:20:02.910 }, 00:20:02.910 "auth": { 00:20:02.910 "state": "completed", 00:20:02.910 "digest": "sha256", 00:20:02.910 "dhgroup": "ffdhe6144" 00:20:02.910 } 00:20:02.910 } 00:20:02.910 ]' 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.910 16:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.169 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:03.169 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:03.735 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.735 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.735 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.735 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.735 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.735 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.735 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.735 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.994 16:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.994 16:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.252 00:20:04.252 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.252 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.252 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.510 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.510 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.511 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.511 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.511 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.511 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.511 { 00:20:04.511 "cntlid": 35, 00:20:04.511 "qid": 0, 00:20:04.511 "state": "enabled", 00:20:04.511 "thread": "nvmf_tgt_poll_group_000", 00:20:04.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:04.511 "listen_address": { 00:20:04.511 "trtype": "TCP", 00:20:04.511 "adrfam": "IPv4", 00:20:04.511 "traddr": "10.0.0.2", 00:20:04.511 "trsvcid": "4420" 00:20:04.511 }, 00:20:04.511 "peer_address": { 00:20:04.511 "trtype": "TCP", 00:20:04.511 "adrfam": "IPv4", 00:20:04.511 "traddr": "10.0.0.1", 00:20:04.511 "trsvcid": "35264" 00:20:04.511 }, 00:20:04.511 "auth": { 00:20:04.511 "state": "completed", 00:20:04.511 "digest": "sha256", 00:20:04.511 "dhgroup": "ffdhe6144" 00:20:04.511 } 00:20:04.511 } 00:20:04.511 ]' 00:20:04.511 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.511 16:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.511 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.769 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.769 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.769 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.769 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.769 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.027 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:05.027 16:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.592 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.157 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.157 16:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.157 { 00:20:06.157 "cntlid": 37, 00:20:06.157 "qid": 0, 00:20:06.157 "state": "enabled", 00:20:06.157 "thread": "nvmf_tgt_poll_group_000", 00:20:06.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:06.157 "listen_address": { 00:20:06.157 "trtype": "TCP", 00:20:06.157 "adrfam": "IPv4", 00:20:06.157 "traddr": "10.0.0.2", 00:20:06.157 "trsvcid": "4420" 00:20:06.157 }, 00:20:06.157 "peer_address": { 00:20:06.157 "trtype": "TCP", 00:20:06.157 "adrfam": "IPv4", 00:20:06.157 "traddr": "10.0.0.1", 00:20:06.157 "trsvcid": "35282" 00:20:06.157 }, 00:20:06.157 "auth": { 00:20:06.157 "state": "completed", 00:20:06.157 "digest": "sha256", 00:20:06.157 "dhgroup": "ffdhe6144" 00:20:06.157 } 00:20:06.157 } 00:20:06.157 ]' 00:20:06.157 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.415 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.415 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.415 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.415 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.415 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.415 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.415 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.674 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:06.674 16:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.240 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.241 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.241 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.241 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.807 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.807 { 00:20:07.807 "cntlid": 39, 00:20:07.807 "qid": 0, 00:20:07.807 "state": "enabled", 00:20:07.807 "thread": "nvmf_tgt_poll_group_000", 00:20:07.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:07.807 "listen_address": { 00:20:07.807 "trtype": "TCP", 00:20:07.807 "adrfam": 
"IPv4", 00:20:07.807 "traddr": "10.0.0.2", 00:20:07.807 "trsvcid": "4420" 00:20:07.807 }, 00:20:07.807 "peer_address": { 00:20:07.807 "trtype": "TCP", 00:20:07.807 "adrfam": "IPv4", 00:20:07.807 "traddr": "10.0.0.1", 00:20:07.807 "trsvcid": "35306" 00:20:07.807 }, 00:20:07.807 "auth": { 00:20:07.807 "state": "completed", 00:20:07.807 "digest": "sha256", 00:20:07.807 "dhgroup": "ffdhe6144" 00:20:07.807 } 00:20:07.807 } 00:20:07.807 ]' 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.807 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.066 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.066 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:08.066 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.066 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.066 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.066 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.324 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:08.324 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.891 
16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.891 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.457 00:20:09.457 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.457 16:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.457 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.715 { 00:20:09.715 "cntlid": 41, 00:20:09.715 "qid": 0, 00:20:09.715 "state": "enabled", 00:20:09.715 "thread": "nvmf_tgt_poll_group_000", 00:20:09.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:09.715 "listen_address": { 00:20:09.715 "trtype": "TCP", 00:20:09.715 "adrfam": "IPv4", 00:20:09.715 "traddr": "10.0.0.2", 00:20:09.715 "trsvcid": "4420" 00:20:09.715 }, 00:20:09.715 "peer_address": { 00:20:09.715 "trtype": "TCP", 00:20:09.715 "adrfam": "IPv4", 00:20:09.715 "traddr": "10.0.0.1", 00:20:09.715 "trsvcid": "35322" 00:20:09.715 }, 00:20:09.715 "auth": { 00:20:09.715 "state": "completed", 00:20:09.715 "digest": "sha256", 00:20:09.715 "dhgroup": "ffdhe8192" 00:20:09.715 } 00:20:09.715 } 00:20:09.715 ]' 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.715 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.973 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:09.973 16:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:10.540 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.540 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.540 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.540 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.540 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.540 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.540 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.540 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.798 16:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.364 00:20:11.364 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.364 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.364 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.364 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.364 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.364 16:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.364 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.622 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.622 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.622 { 00:20:11.622 "cntlid": 43, 00:20:11.622 "qid": 0, 00:20:11.622 "state": "enabled", 00:20:11.622 "thread": "nvmf_tgt_poll_group_000", 00:20:11.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:11.622 "listen_address": { 00:20:11.622 "trtype": "TCP", 00:20:11.622 "adrfam": "IPv4", 00:20:11.622 "traddr": "10.0.0.2", 00:20:11.622 "trsvcid": "4420" 00:20:11.622 }, 00:20:11.622 "peer_address": { 00:20:11.622 "trtype": "TCP", 00:20:11.622 "adrfam": "IPv4", 00:20:11.623 "traddr": "10.0.0.1", 00:20:11.623 "trsvcid": "35360" 00:20:11.623 }, 00:20:11.623 "auth": { 00:20:11.623 "state": "completed", 00:20:11.623 "digest": "sha256", 00:20:11.623 "dhgroup": "ffdhe8192" 00:20:11.623 } 00:20:11.623 } 00:20:11.623 ]' 00:20:11.623 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.623 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.623 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.623 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.623 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.623 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.623 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.623 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.881 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:11.881 16:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:12.484 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.484 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.484 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.484 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.484 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.484 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.484 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.484 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.782 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.783 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.783 16:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.084 00:20:13.084 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.084 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.084 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.343 { 00:20:13.343 "cntlid": 45, 00:20:13.343 "qid": 0, 00:20:13.343 "state": "enabled", 00:20:13.343 "thread": "nvmf_tgt_poll_group_000", 00:20:13.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:13.343 
"listen_address": { 00:20:13.343 "trtype": "TCP", 00:20:13.343 "adrfam": "IPv4", 00:20:13.343 "traddr": "10.0.0.2", 00:20:13.343 "trsvcid": "4420" 00:20:13.343 }, 00:20:13.343 "peer_address": { 00:20:13.343 "trtype": "TCP", 00:20:13.343 "adrfam": "IPv4", 00:20:13.343 "traddr": "10.0.0.1", 00:20:13.343 "trsvcid": "36756" 00:20:13.343 }, 00:20:13.343 "auth": { 00:20:13.343 "state": "completed", 00:20:13.343 "digest": "sha256", 00:20:13.343 "dhgroup": "ffdhe8192" 00:20:13.343 } 00:20:13.343 } 00:20:13.343 ]' 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.343 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.602 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:13.602 16:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:14.168 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.168 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.168 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.168 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.168 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.168 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.168 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.168 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.426 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:14.426 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.427 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.994 00:20:14.994 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.994 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.994 16:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.994 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.994 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.994 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.994 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.994 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.994 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.994 { 00:20:14.994 "cntlid": 47, 00:20:14.994 "qid": 0, 00:20:14.994 "state": "enabled", 00:20:14.994 "thread": "nvmf_tgt_poll_group_000", 00:20:14.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.994 "listen_address": { 00:20:14.994 "trtype": "TCP", 00:20:14.994 "adrfam": "IPv4", 00:20:14.994 "traddr": "10.0.0.2", 00:20:14.994 "trsvcid": "4420" 00:20:14.994 }, 00:20:14.994 "peer_address": { 00:20:14.994 "trtype": "TCP", 00:20:14.994 "adrfam": "IPv4", 00:20:14.994 "traddr": "10.0.0.1", 00:20:14.994 "trsvcid": "36772" 00:20:14.994 }, 00:20:14.994 "auth": { 00:20:14.994 "state": "completed", 00:20:14.994 "digest": "sha256", 00:20:14.994 "dhgroup": "ffdhe8192" 00:20:14.994 } 00:20:14.994 } 00:20:14.994 ]' 00:20:14.994 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.994 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.994 16:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.251 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:15.251 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.251 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.251 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.251 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.509 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:15.509 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.076 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.076 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:16.076 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.076 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.076 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.076 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.076 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.076 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.076 
16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.077 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.077 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.077 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.077 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.077 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.335 00:20:16.335 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.335 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.335 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.593 { 00:20:16.593 "cntlid": 49, 00:20:16.593 "qid": 0, 00:20:16.593 "state": "enabled", 00:20:16.593 "thread": "nvmf_tgt_poll_group_000", 00:20:16.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.593 "listen_address": { 00:20:16.593 "trtype": "TCP", 00:20:16.593 "adrfam": "IPv4", 00:20:16.593 "traddr": "10.0.0.2", 00:20:16.593 "trsvcid": "4420" 00:20:16.593 }, 00:20:16.593 "peer_address": { 00:20:16.593 "trtype": "TCP", 00:20:16.593 "adrfam": "IPv4", 00:20:16.593 "traddr": "10.0.0.1", 00:20:16.593 "trsvcid": "36808" 00:20:16.593 }, 00:20:16.593 "auth": { 00:20:16.593 "state": "completed", 00:20:16.593 "digest": "sha384", 00:20:16.593 "dhgroup": "null" 00:20:16.593 } 00:20:16.593 } 00:20:16.593 ]' 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.593 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.851 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.851 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:16.851 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.851 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:16.852 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:17.418 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.418 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.418 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.418 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.418 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.418 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.418 16:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.418 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.677 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.935 00:20:17.935 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.935 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.935 16:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.194 { 00:20:18.194 "cntlid": 51, 00:20:18.194 "qid": 0, 00:20:18.194 "state": "enabled", 00:20:18.194 "thread": "nvmf_tgt_poll_group_000", 00:20:18.194 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.194 "listen_address": { 00:20:18.194 "trtype": "TCP", 00:20:18.194 "adrfam": "IPv4", 00:20:18.194 "traddr": "10.0.0.2", 00:20:18.194 "trsvcid": "4420" 00:20:18.194 }, 00:20:18.194 "peer_address": { 00:20:18.194 "trtype": "TCP", 00:20:18.194 "adrfam": "IPv4", 00:20:18.194 "traddr": "10.0.0.1", 00:20:18.194 "trsvcid": "36820" 00:20:18.194 }, 00:20:18.194 "auth": { 00:20:18.194 "state": "completed", 00:20:18.194 "digest": "sha384", 00:20:18.194 "dhgroup": "null" 00:20:18.194 } 00:20:18.194 } 00:20:18.194 ]' 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.194 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.452 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:18.452 16:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:19.019 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.020 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.020 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.020 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.020 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.020 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.020 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.020 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.279 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.537 00:20:19.537 16:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.537 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.537 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.795 { 00:20:19.795 "cntlid": 53, 00:20:19.795 "qid": 0, 00:20:19.795 "state": "enabled", 00:20:19.795 "thread": "nvmf_tgt_poll_group_000", 00:20:19.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.795 "listen_address": { 00:20:19.795 "trtype": "TCP", 00:20:19.795 "adrfam": "IPv4", 00:20:19.795 "traddr": "10.0.0.2", 00:20:19.795 "trsvcid": "4420" 00:20:19.795 }, 00:20:19.795 "peer_address": { 00:20:19.795 "trtype": "TCP", 00:20:19.795 "adrfam": "IPv4", 00:20:19.795 "traddr": "10.0.0.1", 00:20:19.795 "trsvcid": "36838" 00:20:19.795 }, 00:20:19.795 "auth": { 00:20:19.795 "state": "completed", 00:20:19.795 "digest": "sha384", 00:20:19.795 "dhgroup": "null" 00:20:19.795 } 00:20:19.795 } 00:20:19.795 ]' 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.795 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.053 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:20.054 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:20.620 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.620 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.620 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.620 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.620 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.620 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.620 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.620 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:20.878 
16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.878 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.136 00:20:21.136 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.136 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.136 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.394 16:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.394 { 00:20:21.394 "cntlid": 55, 00:20:21.394 "qid": 0, 00:20:21.394 "state": "enabled", 00:20:21.394 "thread": "nvmf_tgt_poll_group_000", 00:20:21.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.394 "listen_address": { 00:20:21.394 "trtype": "TCP", 00:20:21.394 "adrfam": "IPv4", 00:20:21.394 "traddr": "10.0.0.2", 00:20:21.394 "trsvcid": "4420" 00:20:21.394 }, 00:20:21.394 "peer_address": { 00:20:21.394 "trtype": "TCP", 00:20:21.394 "adrfam": "IPv4", 00:20:21.394 "traddr": "10.0.0.1", 00:20:21.394 "trsvcid": "36860" 00:20:21.394 }, 00:20:21.394 "auth": { 00:20:21.394 "state": "completed", 00:20:21.394 "digest": "sha384", 00:20:21.394 "dhgroup": "null" 00:20:21.394 } 00:20:21.394 } 00:20:21.394 ]' 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.394 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.652 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:21.652 16:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:22.218 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.218 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.218 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.218 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.218 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.218 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.218 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.218 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.218 16:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.476 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.477 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.477 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.735 00:20:22.735 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.735 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.735 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.994 { 00:20:22.994 "cntlid": 57, 00:20:22.994 "qid": 0, 00:20:22.994 "state": "enabled", 00:20:22.994 "thread": "nvmf_tgt_poll_group_000", 00:20:22.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:22.994 "listen_address": { 00:20:22.994 "trtype": "TCP", 00:20:22.994 "adrfam": "IPv4", 00:20:22.994 "traddr": "10.0.0.2", 00:20:22.994 
"trsvcid": "4420" 00:20:22.994 }, 00:20:22.994 "peer_address": { 00:20:22.994 "trtype": "TCP", 00:20:22.994 "adrfam": "IPv4", 00:20:22.994 "traddr": "10.0.0.1", 00:20:22.994 "trsvcid": "57558" 00:20:22.994 }, 00:20:22.994 "auth": { 00:20:22.994 "state": "completed", 00:20:22.994 "digest": "sha384", 00:20:22.994 "dhgroup": "ffdhe2048" 00:20:22.994 } 00:20:22.994 } 00:20:22.994 ]' 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.994 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.252 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:23.252 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:23.818 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.818 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.818 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.818 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.818 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.818 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.818 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.818 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.076 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:24.076 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.077 16:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.077 16:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.335 00:20:24.335 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.335 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.335 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.593 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.593 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.593 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.593 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.593 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.593 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.593 { 00:20:24.593 "cntlid": 59, 00:20:24.593 "qid": 0, 00:20:24.594 "state": "enabled", 00:20:24.594 "thread": "nvmf_tgt_poll_group_000", 00:20:24.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.594 "listen_address": { 00:20:24.594 "trtype": "TCP", 00:20:24.594 "adrfam": "IPv4", 00:20:24.594 "traddr": "10.0.0.2", 00:20:24.594 "trsvcid": "4420" 00:20:24.594 }, 00:20:24.594 "peer_address": { 00:20:24.594 "trtype": "TCP", 00:20:24.594 "adrfam": "IPv4", 00:20:24.594 "traddr": "10.0.0.1", 00:20:24.594 "trsvcid": "57586" 00:20:24.594 }, 00:20:24.594 "auth": { 00:20:24.594 "state": "completed", 00:20:24.594 "digest": "sha384", 00:20:24.594 "dhgroup": "ffdhe2048" 00:20:24.594 } 00:20:24.594 } 00:20:24.594 ]' 00:20:24.594 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.594 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.594 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.594 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.594 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.594 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.594 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.594 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.852 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:24.852 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:25.422 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.422 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.422 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.422 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.422 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.422 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.422 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.422 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.680 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.938 00:20:25.938 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.938 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.938 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.938 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.196 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.196 16:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.196 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.196 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.196 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.196 { 00:20:26.196 "cntlid": 61, 00:20:26.196 "qid": 0, 00:20:26.196 "state": "enabled", 00:20:26.196 "thread": "nvmf_tgt_poll_group_000", 00:20:26.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.196 "listen_address": { 00:20:26.196 "trtype": "TCP", 00:20:26.196 "adrfam": "IPv4", 00:20:26.196 "traddr": "10.0.0.2", 00:20:26.196 "trsvcid": "4420" 00:20:26.196 }, 00:20:26.196 "peer_address": { 00:20:26.196 "trtype": "TCP", 00:20:26.196 "adrfam": "IPv4", 00:20:26.196 "traddr": "10.0.0.1", 00:20:26.196 "trsvcid": "57616" 00:20:26.196 }, 00:20:26.196 "auth": { 00:20:26.196 "state": "completed", 00:20:26.196 "digest": "sha384", 00:20:26.197 "dhgroup": "ffdhe2048" 00:20:26.197 } 00:20:26.197 } 00:20:26.197 ]' 00:20:26.197 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.197 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.197 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.197 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.197 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.197 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.197 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.197 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.455 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:26.455 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:27.021 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.021 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.021 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.021 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.021 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.021 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.021 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.021 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.280 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.538 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.538 { 00:20:27.538 "cntlid": 63, 00:20:27.538 "qid": 0, 00:20:27.538 "state": "enabled", 00:20:27.538 "thread": "nvmf_tgt_poll_group_000", 00:20:27.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.538 "listen_address": { 00:20:27.538 "trtype": "TCP", 00:20:27.538 "adrfam": 
"IPv4", 00:20:27.538 "traddr": "10.0.0.2", 00:20:27.538 "trsvcid": "4420" 00:20:27.538 }, 00:20:27.538 "peer_address": { 00:20:27.538 "trtype": "TCP", 00:20:27.538 "adrfam": "IPv4", 00:20:27.538 "traddr": "10.0.0.1", 00:20:27.538 "trsvcid": "57648" 00:20:27.538 }, 00:20:27.538 "auth": { 00:20:27.538 "state": "completed", 00:20:27.538 "digest": "sha384", 00:20:27.538 "dhgroup": "ffdhe2048" 00:20:27.538 } 00:20:27.538 } 00:20:27.538 ]' 00:20:27.538 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.796 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.797 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.797 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.797 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.797 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.797 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.797 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.055 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:28.055 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.621 
16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.621 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.879 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.879 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.879 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.879 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.879 00:20:29.137 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.137 16:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.137 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.137 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.137 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.137 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.137 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.137 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.137 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.137 { 00:20:29.137 "cntlid": 65, 00:20:29.137 "qid": 0, 00:20:29.137 "state": "enabled", 00:20:29.137 "thread": "nvmf_tgt_poll_group_000", 00:20:29.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.137 "listen_address": { 00:20:29.137 "trtype": "TCP", 00:20:29.137 "adrfam": "IPv4", 00:20:29.137 "traddr": "10.0.0.2", 00:20:29.137 "trsvcid": "4420" 00:20:29.137 }, 00:20:29.137 "peer_address": { 00:20:29.137 "trtype": "TCP", 00:20:29.137 "adrfam": "IPv4", 00:20:29.137 "traddr": "10.0.0.1", 00:20:29.137 "trsvcid": "57684" 00:20:29.137 }, 00:20:29.137 "auth": { 00:20:29.137 "state": "completed", 00:20:29.137 "digest": "sha384", 00:20:29.137 "dhgroup": "ffdhe3072" 00:20:29.137 } 00:20:29.137 } 00:20:29.137 ]' 00:20:29.138 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.396 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:29.396 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.396 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.396 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.396 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.396 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.396 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.654 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:29.654 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:30.220 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.220 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.220 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.220 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.221 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.479 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.479 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.479 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.479 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.479 00:20:30.737 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.737 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.737 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.737 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.737 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.737 16:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.737 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.737 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.737 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.737 { 00:20:30.737 "cntlid": 67, 00:20:30.737 "qid": 0, 00:20:30.737 "state": "enabled", 00:20:30.737 "thread": "nvmf_tgt_poll_group_000", 00:20:30.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.737 "listen_address": { 00:20:30.737 "trtype": "TCP", 00:20:30.737 "adrfam": "IPv4", 00:20:30.737 "traddr": "10.0.0.2", 00:20:30.737 "trsvcid": "4420" 00:20:30.737 }, 00:20:30.737 "peer_address": { 00:20:30.737 "trtype": "TCP", 00:20:30.737 "adrfam": "IPv4", 00:20:30.737 "traddr": "10.0.0.1", 00:20:30.738 "trsvcid": "57704" 00:20:30.738 }, 00:20:30.738 "auth": { 00:20:30.738 "state": "completed", 00:20:30.738 "digest": "sha384", 00:20:30.738 "dhgroup": "ffdhe3072" 00:20:30.738 } 00:20:30.738 } 00:20:30.738 ]' 00:20:30.738 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.996 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.996 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.996 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.996 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.996 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.996 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.996 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.254 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:31.254 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.819 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.077 16:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.335 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.335 { 00:20:32.335 "cntlid": 69, 00:20:32.335 "qid": 0, 00:20:32.335 "state": "enabled", 00:20:32.335 "thread": "nvmf_tgt_poll_group_000", 00:20:32.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.335 
"listen_address": { 00:20:32.335 "trtype": "TCP", 00:20:32.335 "adrfam": "IPv4", 00:20:32.335 "traddr": "10.0.0.2", 00:20:32.335 "trsvcid": "4420" 00:20:32.335 }, 00:20:32.335 "peer_address": { 00:20:32.335 "trtype": "TCP", 00:20:32.335 "adrfam": "IPv4", 00:20:32.335 "traddr": "10.0.0.1", 00:20:32.335 "trsvcid": "57730" 00:20:32.335 }, 00:20:32.335 "auth": { 00:20:32.335 "state": "completed", 00:20:32.335 "digest": "sha384", 00:20:32.335 "dhgroup": "ffdhe3072" 00:20:32.335 } 00:20:32.335 } 00:20:32.335 ]' 00:20:32.335 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.594 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.594 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.594 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.594 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.594 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.594 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.594 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.852 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:32.852 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:33.418 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.419 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.677 00:20:33.677 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.677 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:33.677 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.935 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.935 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.935 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.935 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.935 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.935 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.935 { 00:20:33.935 "cntlid": 71, 00:20:33.935 "qid": 0, 00:20:33.935 "state": "enabled", 00:20:33.935 "thread": "nvmf_tgt_poll_group_000", 00:20:33.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:33.935 "listen_address": { 00:20:33.935 "trtype": "TCP", 00:20:33.935 "adrfam": "IPv4", 00:20:33.935 "traddr": "10.0.0.2", 00:20:33.935 "trsvcid": "4420" 00:20:33.935 }, 00:20:33.935 "peer_address": { 00:20:33.935 "trtype": "TCP", 00:20:33.935 "adrfam": "IPv4", 00:20:33.935 "traddr": "10.0.0.1", 00:20:33.935 "trsvcid": "40822" 00:20:33.935 }, 00:20:33.935 "auth": { 00:20:33.935 "state": "completed", 00:20:33.935 "digest": "sha384", 00:20:33.935 "dhgroup": "ffdhe3072" 00:20:33.935 } 00:20:33.935 } 00:20:33.935 ]' 00:20:33.935 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.935 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.935 16:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.193 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.193 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.193 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.193 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.194 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.452 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:34.452 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.018 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.018 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.276 00:20:35.276 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.276 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.276 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.533 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.533 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.534 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.534 16:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.534 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.534 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.534 { 00:20:35.534 "cntlid": 73, 00:20:35.534 "qid": 0, 00:20:35.534 "state": "enabled", 00:20:35.534 "thread": "nvmf_tgt_poll_group_000", 00:20:35.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.534 "listen_address": { 00:20:35.534 "trtype": "TCP", 00:20:35.534 "adrfam": "IPv4", 00:20:35.534 "traddr": "10.0.0.2", 00:20:35.534 "trsvcid": "4420" 00:20:35.534 }, 00:20:35.534 "peer_address": { 00:20:35.534 "trtype": "TCP", 00:20:35.534 "adrfam": "IPv4", 00:20:35.534 "traddr": "10.0.0.1", 00:20:35.534 "trsvcid": "40862" 00:20:35.534 }, 00:20:35.534 "auth": { 00:20:35.534 "state": "completed", 00:20:35.534 "digest": "sha384", 00:20:35.534 "dhgroup": "ffdhe4096" 00:20:35.534 } 00:20:35.534 } 00:20:35.534 ]' 00:20:35.534 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.534 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.534 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.792 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.792 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.792 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.792 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.792 16:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.792 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:35.792 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:36.359 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.359 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.359 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.359 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.359 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.359 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.359 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.359 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.617 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.876 00:20:36.876 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.876 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.876 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.134 { 00:20:37.134 "cntlid": 75, 00:20:37.134 "qid": 0, 00:20:37.134 "state": "enabled", 00:20:37.134 "thread": "nvmf_tgt_poll_group_000", 00:20:37.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.134 
"listen_address": { 00:20:37.134 "trtype": "TCP", 00:20:37.134 "adrfam": "IPv4", 00:20:37.134 "traddr": "10.0.0.2", 00:20:37.134 "trsvcid": "4420" 00:20:37.134 }, 00:20:37.134 "peer_address": { 00:20:37.134 "trtype": "TCP", 00:20:37.134 "adrfam": "IPv4", 00:20:37.134 "traddr": "10.0.0.1", 00:20:37.134 "trsvcid": "40882" 00:20:37.134 }, 00:20:37.134 "auth": { 00:20:37.134 "state": "completed", 00:20:37.134 "digest": "sha384", 00:20:37.134 "dhgroup": "ffdhe4096" 00:20:37.134 } 00:20:37.134 } 00:20:37.134 ]' 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.134 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.392 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.392 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.392 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.392 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:37.393 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:37.962 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.962 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.962 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.962 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.962 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.962 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.962 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.962 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.220 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.479 00:20:38.479 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:38.479 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.479 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.737 { 00:20:38.737 "cntlid": 77, 00:20:38.737 "qid": 0, 00:20:38.737 "state": "enabled", 00:20:38.737 "thread": "nvmf_tgt_poll_group_000", 00:20:38.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:38.737 "listen_address": { 00:20:38.737 "trtype": "TCP", 00:20:38.737 "adrfam": "IPv4", 00:20:38.737 "traddr": "10.0.0.2", 00:20:38.737 "trsvcid": "4420" 00:20:38.737 }, 00:20:38.737 "peer_address": { 00:20:38.737 "trtype": "TCP", 00:20:38.737 "adrfam": "IPv4", 00:20:38.737 "traddr": "10.0.0.1", 00:20:38.737 "trsvcid": "40892" 00:20:38.737 }, 00:20:38.737 "auth": { 00:20:38.737 "state": "completed", 00:20:38.737 "digest": "sha384", 00:20:38.737 "dhgroup": "ffdhe4096" 00:20:38.737 } 00:20:38.737 } 00:20:38.737 ]' 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.737 16:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.737 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.994 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.994 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.994 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.994 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:38.994 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:39.560 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.561 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:39.561 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.561 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.561 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.561 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.561 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.561 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:39.819 16:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.819 16:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.077 00:20:40.077 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.077 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.077 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.335 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.336 16:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.336 { 00:20:40.336 "cntlid": 79, 00:20:40.336 "qid": 0, 00:20:40.336 "state": "enabled", 00:20:40.336 "thread": "nvmf_tgt_poll_group_000", 00:20:40.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.336 "listen_address": { 00:20:40.336 "trtype": "TCP", 00:20:40.336 "adrfam": "IPv4", 00:20:40.336 "traddr": "10.0.0.2", 00:20:40.336 "trsvcid": "4420" 00:20:40.336 }, 00:20:40.336 "peer_address": { 00:20:40.336 "trtype": "TCP", 00:20:40.336 "adrfam": "IPv4", 00:20:40.336 "traddr": "10.0.0.1", 00:20:40.336 "trsvcid": "40902" 00:20:40.336 }, 00:20:40.336 "auth": { 00:20:40.336 "state": "completed", 00:20:40.336 "digest": "sha384", 00:20:40.336 "dhgroup": "ffdhe4096" 00:20:40.336 } 00:20:40.336 } 00:20:40.336 ]' 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.336 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.594 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.594 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.594 16:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.594 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:40.594 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:41.160 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.161 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.161 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.161 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.161 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.161 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.161 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.161 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:41.161 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.419 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.985 00:20:41.985 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.985 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.985 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.985 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.985 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.985 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.985 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.985 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.985 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.985 { 00:20:41.985 "cntlid": 81, 00:20:41.985 "qid": 0, 00:20:41.985 "state": "enabled", 00:20:41.985 "thread": "nvmf_tgt_poll_group_000", 00:20:41.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.985 "listen_address": { 
00:20:41.985 "trtype": "TCP", 00:20:41.985 "adrfam": "IPv4", 00:20:41.985 "traddr": "10.0.0.2", 00:20:41.985 "trsvcid": "4420" 00:20:41.985 }, 00:20:41.985 "peer_address": { 00:20:41.985 "trtype": "TCP", 00:20:41.985 "adrfam": "IPv4", 00:20:41.985 "traddr": "10.0.0.1", 00:20:41.985 "trsvcid": "40936" 00:20:41.985 }, 00:20:41.985 "auth": { 00:20:41.985 "state": "completed", 00:20:41.985 "digest": "sha384", 00:20:41.985 "dhgroup": "ffdhe6144" 00:20:41.985 } 00:20:41.985 } 00:20:41.985 ]' 00:20:41.985 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.985 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.985 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.243 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.243 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.243 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.243 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.243 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.243 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:42.244 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:42.810 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.810 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.810 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.810 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.810 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.810 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.810 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.810 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.068 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.069 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.069 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.635 00:20:43.635 16:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.635 { 00:20:43.635 "cntlid": 83, 00:20:43.635 "qid": 0, 00:20:43.635 "state": "enabled", 00:20:43.635 "thread": "nvmf_tgt_poll_group_000", 00:20:43.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.635 "listen_address": { 00:20:43.635 "trtype": "TCP", 00:20:43.635 "adrfam": "IPv4", 00:20:43.635 "traddr": "10.0.0.2", 00:20:43.635 "trsvcid": "4420" 00:20:43.635 }, 00:20:43.635 "peer_address": { 00:20:43.635 "trtype": "TCP", 00:20:43.635 "adrfam": "IPv4", 00:20:43.635 "traddr": "10.0.0.1", 00:20:43.635 "trsvcid": "39604" 00:20:43.635 }, 00:20:43.635 "auth": { 00:20:43.635 "state": "completed", 00:20:43.635 "digest": "sha384", 00:20:43.635 "dhgroup": "ffdhe6144" 00:20:43.635 } 00:20:43.635 } 00:20:43.635 ]' 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.635 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.893 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:43.893 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.893 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.893 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.893 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.151 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:44.151 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:44.719 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.719 16:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.719 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.719 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.719 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.719 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.719 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.719 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.977 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.234 00:20:45.234 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.234 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.234 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.492 { 00:20:45.492 "cntlid": 85, 00:20:45.492 "qid": 0, 00:20:45.492 "state": "enabled", 00:20:45.492 "thread": "nvmf_tgt_poll_group_000", 00:20:45.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.492 "listen_address": { 00:20:45.492 "trtype": "TCP", 00:20:45.492 "adrfam": "IPv4", 00:20:45.492 "traddr": "10.0.0.2", 00:20:45.492 "trsvcid": "4420" 00:20:45.492 }, 00:20:45.492 "peer_address": { 00:20:45.492 "trtype": "TCP", 00:20:45.492 "adrfam": "IPv4", 00:20:45.492 "traddr": "10.0.0.1", 00:20:45.492 "trsvcid": "39642" 00:20:45.492 }, 00:20:45.492 "auth": { 00:20:45.492 "state": "completed", 00:20:45.492 "digest": "sha384", 00:20:45.492 "dhgroup": "ffdhe6144" 00:20:45.492 } 00:20:45.492 } 00:20:45.492 ]' 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.492 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.750 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:45.750 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:46.316 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.316 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.316 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.316 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.316 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.316 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:46.316 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.316 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.574 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.833 00:20:46.833 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.833 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.833 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.091 { 00:20:47.091 "cntlid": 87, 00:20:47.091 "qid": 0, 00:20:47.091 "state": "enabled", 00:20:47.091 "thread": "nvmf_tgt_poll_group_000", 00:20:47.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.091 "listen_address": { 00:20:47.091 "trtype": 
"TCP", 00:20:47.091 "adrfam": "IPv4", 00:20:47.091 "traddr": "10.0.0.2", 00:20:47.091 "trsvcid": "4420" 00:20:47.091 }, 00:20:47.091 "peer_address": { 00:20:47.091 "trtype": "TCP", 00:20:47.091 "adrfam": "IPv4", 00:20:47.091 "traddr": "10.0.0.1", 00:20:47.091 "trsvcid": "39668" 00:20:47.091 }, 00:20:47.091 "auth": { 00:20:47.091 "state": "completed", 00:20:47.091 "digest": "sha384", 00:20:47.091 "dhgroup": "ffdhe6144" 00:20:47.091 } 00:20:47.091 } 00:20:47.091 ]' 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.091 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.350 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.350 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.350 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.350 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:47.350 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.916 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.174 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:48.174 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.174 16:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.174 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.174 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.174 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.175 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.175 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.175 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.175 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.175 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.175 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.175 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.741 00:20:48.741 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.741 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.741 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.000 { 00:20:49.000 "cntlid": 89, 00:20:49.000 "qid": 0, 00:20:49.000 "state": "enabled", 00:20:49.000 "thread": "nvmf_tgt_poll_group_000", 00:20:49.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.000 "listen_address": { 00:20:49.000 "trtype": "TCP", 00:20:49.000 "adrfam": "IPv4", 00:20:49.000 "traddr": "10.0.0.2", 00:20:49.000 "trsvcid": "4420" 00:20:49.000 }, 00:20:49.000 "peer_address": { 00:20:49.000 "trtype": "TCP", 00:20:49.000 "adrfam": "IPv4", 00:20:49.000 "traddr": "10.0.0.1", 00:20:49.000 "trsvcid": "39712" 00:20:49.000 }, 00:20:49.000 "auth": { 00:20:49.000 "state": "completed", 00:20:49.000 "digest": "sha384", 00:20:49.000 "dhgroup": "ffdhe8192" 00:20:49.000 } 00:20:49.000 } 00:20:49.000 ]' 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.000 16:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.000 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.258 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:49.258 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:49.825 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:49.825 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.825 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.825 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.825 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.825 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.825 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.825 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.141 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.458 00:20:50.458 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.458 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.458 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.763 { 00:20:50.763 "cntlid": 91, 00:20:50.763 "qid": 0, 00:20:50.763 "state": "enabled", 00:20:50.763 "thread": "nvmf_tgt_poll_group_000", 00:20:50.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.763 "listen_address": { 00:20:50.763 "trtype": "TCP", 00:20:50.763 "adrfam": "IPv4", 00:20:50.763 "traddr": "10.0.0.2", 00:20:50.763 "trsvcid": "4420" 00:20:50.763 }, 00:20:50.763 "peer_address": { 00:20:50.763 "trtype": "TCP", 00:20:50.763 "adrfam": "IPv4", 00:20:50.763 "traddr": "10.0.0.1", 00:20:50.763 "trsvcid": "39734" 00:20:50.763 }, 00:20:50.763 "auth": { 00:20:50.763 "state": "completed", 00:20:50.763 "digest": "sha384", 00:20:50.763 "dhgroup": "ffdhe8192" 00:20:50.763 } 00:20:50.763 } 00:20:50.763 ]' 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.763 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.021 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:51.021 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:51.586 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.586 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.586 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.586 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.586 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.587 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:51.587 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.587 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.845 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.411 00:20:52.411 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.411 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.411 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.411 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.411 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.411 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.411 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.669 { 00:20:52.669 "cntlid": 93, 00:20:52.669 "qid": 0, 00:20:52.669 "state": "enabled", 00:20:52.669 "thread": "nvmf_tgt_poll_group_000", 00:20:52.669 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.669 "listen_address": { 00:20:52.669 "trtype": "TCP", 00:20:52.669 "adrfam": "IPv4", 00:20:52.669 "traddr": "10.0.0.2", 00:20:52.669 "trsvcid": "4420" 00:20:52.669 }, 00:20:52.669 "peer_address": { 00:20:52.669 "trtype": "TCP", 00:20:52.669 "adrfam": "IPv4", 00:20:52.669 "traddr": "10.0.0.1", 00:20:52.669 "trsvcid": "39754" 00:20:52.669 }, 00:20:52.669 "auth": { 00:20:52.669 "state": "completed", 00:20:52.669 "digest": "sha384", 00:20:52.669 "dhgroup": "ffdhe8192" 00:20:52.669 } 00:20:52.669 } 00:20:52.669 ]' 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.669 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.926 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:52.926 16:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:53.491 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.491 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.491 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.491 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.491 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.491 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.491 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.491 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.749 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.007 00:20:54.007 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:54.007 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.007 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.265 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.265 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.265 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.265 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.265 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.265 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.265 { 00:20:54.265 "cntlid": 95, 00:20:54.265 "qid": 0, 00:20:54.265 "state": "enabled", 00:20:54.265 "thread": "nvmf_tgt_poll_group_000", 00:20:54.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.265 "listen_address": { 00:20:54.265 "trtype": "TCP", 00:20:54.265 "adrfam": "IPv4", 00:20:54.265 "traddr": "10.0.0.2", 00:20:54.265 "trsvcid": "4420" 00:20:54.265 }, 00:20:54.265 "peer_address": { 00:20:54.265 "trtype": "TCP", 00:20:54.265 "adrfam": "IPv4", 00:20:54.265 "traddr": "10.0.0.1", 00:20:54.265 "trsvcid": "50376" 00:20:54.265 }, 00:20:54.265 "auth": { 00:20:54.265 "state": "completed", 00:20:54.265 "digest": "sha384", 00:20:54.265 "dhgroup": "ffdhe8192" 00:20:54.265 } 00:20:54.265 } 00:20:54.265 ]' 00:20:54.265 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.265 16:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.265 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.523 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.523 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.523 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.523 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.523 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.782 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:54.782 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.348 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.606 00:20:55.606 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.606 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.606 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.864 16:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.864 { 00:20:55.864 "cntlid": 97, 00:20:55.864 "qid": 0, 00:20:55.864 "state": "enabled", 00:20:55.864 "thread": "nvmf_tgt_poll_group_000", 00:20:55.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.864 "listen_address": { 00:20:55.864 "trtype": "TCP", 00:20:55.864 "adrfam": "IPv4", 00:20:55.864 "traddr": "10.0.0.2", 00:20:55.864 "trsvcid": "4420" 00:20:55.864 }, 00:20:55.864 "peer_address": { 00:20:55.864 "trtype": "TCP", 00:20:55.864 "adrfam": "IPv4", 00:20:55.864 "traddr": "10.0.0.1", 00:20:55.864 "trsvcid": "50400" 00:20:55.864 }, 00:20:55.864 "auth": { 00:20:55.864 "state": "completed", 00:20:55.864 "digest": "sha512", 00:20:55.864 "dhgroup": "null" 00:20:55.864 } 00:20:55.864 } 00:20:55.864 ]' 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.864 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.122 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.122 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.122 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.122 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:56.122 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:20:56.688 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.688 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.688 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.688 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.688 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.688 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.688 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.688 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.946 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.947 16:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.205 00:20:57.205 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.205 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.205 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.463 { 00:20:57.463 "cntlid": 99, 
00:20:57.463 "qid": 0, 00:20:57.463 "state": "enabled", 00:20:57.463 "thread": "nvmf_tgt_poll_group_000", 00:20:57.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:57.463 "listen_address": { 00:20:57.463 "trtype": "TCP", 00:20:57.463 "adrfam": "IPv4", 00:20:57.463 "traddr": "10.0.0.2", 00:20:57.463 "trsvcid": "4420" 00:20:57.463 }, 00:20:57.463 "peer_address": { 00:20:57.463 "trtype": "TCP", 00:20:57.463 "adrfam": "IPv4", 00:20:57.463 "traddr": "10.0.0.1", 00:20:57.463 "trsvcid": "50428" 00:20:57.463 }, 00:20:57.463 "auth": { 00:20:57.463 "state": "completed", 00:20:57.463 "digest": "sha512", 00:20:57.463 "dhgroup": "null" 00:20:57.463 } 00:20:57.463 } 00:20:57.463 ]' 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.463 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.721 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret 
DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:57.721 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:20:58.288 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.288 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.288 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.288 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.288 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.288 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.288 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.288 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.546 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.805 00:20:58.805 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.805 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.805 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.064 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.064 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.064 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.064 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.064 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.064 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.064 { 00:20:59.064 "cntlid": 101, 00:20:59.064 "qid": 0, 00:20:59.064 "state": "enabled", 00:20:59.064 "thread": "nvmf_tgt_poll_group_000", 00:20:59.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.064 "listen_address": { 00:20:59.064 "trtype": "TCP", 00:20:59.064 "adrfam": "IPv4", 00:20:59.064 "traddr": "10.0.0.2", 00:20:59.064 "trsvcid": "4420" 00:20:59.064 }, 00:20:59.064 "peer_address": { 00:20:59.064 "trtype": "TCP", 00:20:59.064 "adrfam": "IPv4", 00:20:59.064 "traddr": "10.0.0.1", 00:20:59.064 "trsvcid": "50462" 00:20:59.064 }, 00:20:59.064 "auth": { 00:20:59.064 "state": "completed", 00:20:59.064 "digest": "sha512", 00:20:59.064 "dhgroup": "null" 00:20:59.064 } 00:20:59.064 } 
00:20:59.064 ]' 00:20:59.064 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.064 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.064 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.064 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.064 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.064 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.064 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.064 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.322 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:59.322 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:20:59.888 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.888 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.888 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.888 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.888 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.888 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.888 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.888 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.888 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.146 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.405 00:21:00.405 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.405 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.405 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.664 { 00:21:00.664 "cntlid": 103, 00:21:00.664 "qid": 0, 00:21:00.664 "state": "enabled", 00:21:00.664 "thread": "nvmf_tgt_poll_group_000", 00:21:00.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.664 "listen_address": { 00:21:00.664 "trtype": "TCP", 00:21:00.664 "adrfam": "IPv4", 00:21:00.664 "traddr": "10.0.0.2", 00:21:00.664 "trsvcid": "4420" 00:21:00.664 }, 00:21:00.664 "peer_address": { 00:21:00.664 "trtype": "TCP", 00:21:00.664 "adrfam": "IPv4", 00:21:00.664 "traddr": "10.0.0.1", 00:21:00.664 "trsvcid": "50490" 00:21:00.664 }, 00:21:00.664 "auth": { 00:21:00.664 "state": "completed", 00:21:00.664 "digest": "sha512", 00:21:00.664 "dhgroup": "null" 00:21:00.664 } 00:21:00.664 } 00:21:00.664 ]' 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.664 16:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.664 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.922 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:00.922 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:01.489 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.489 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.489 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.489 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.489 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.489 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.489 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.489 16:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.489 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.747 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.005 00:21:02.005 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.005 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.005 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.264 { 00:21:02.264 "cntlid": 105, 00:21:02.264 "qid": 0, 00:21:02.264 "state": "enabled", 00:21:02.264 "thread": "nvmf_tgt_poll_group_000", 00:21:02.264 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.264 "listen_address": { 00:21:02.264 "trtype": "TCP", 00:21:02.264 "adrfam": "IPv4", 00:21:02.264 "traddr": "10.0.0.2", 00:21:02.264 "trsvcid": "4420" 00:21:02.264 }, 00:21:02.264 "peer_address": { 00:21:02.264 "trtype": "TCP", 00:21:02.264 "adrfam": "IPv4", 00:21:02.264 "traddr": "10.0.0.1", 00:21:02.264 "trsvcid": "50514" 00:21:02.264 }, 00:21:02.264 "auth": { 00:21:02.264 "state": "completed", 00:21:02.264 "digest": "sha512", 00:21:02.264 "dhgroup": "ffdhe2048" 00:21:02.264 } 00:21:02.264 } 00:21:02.264 ]' 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.264 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.523 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret 
DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:02.523 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:03.090 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.090 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.090 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.090 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.090 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.090 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.090 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.090 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.348 16:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.348 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.607 00:21:03.607 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.607 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.607 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.865 { 00:21:03.865 "cntlid": 107, 00:21:03.865 "qid": 0, 00:21:03.865 "state": "enabled", 00:21:03.865 "thread": "nvmf_tgt_poll_group_000", 00:21:03.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.865 "listen_address": { 00:21:03.865 "trtype": "TCP", 00:21:03.865 "adrfam": "IPv4", 00:21:03.865 "traddr": "10.0.0.2", 00:21:03.865 "trsvcid": "4420" 00:21:03.865 }, 00:21:03.865 "peer_address": { 00:21:03.865 "trtype": "TCP", 00:21:03.865 "adrfam": "IPv4", 00:21:03.865 "traddr": "10.0.0.1", 00:21:03.865 "trsvcid": "45430" 00:21:03.865 }, 00:21:03.865 "auth": { 00:21:03.865 "state": 
"completed", 00:21:03.865 "digest": "sha512", 00:21:03.865 "dhgroup": "ffdhe2048" 00:21:03.865 } 00:21:03.865 } 00:21:03.865 ]' 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.865 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.123 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:04.124 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:04.690 16:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.690 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.690 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.690 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.690 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.690 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.690 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.690 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.948 16:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.213 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.213 
16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.213 { 00:21:05.213 "cntlid": 109, 00:21:05.213 "qid": 0, 00:21:05.213 "state": "enabled", 00:21:05.213 "thread": "nvmf_tgt_poll_group_000", 00:21:05.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.213 "listen_address": { 00:21:05.213 "trtype": "TCP", 00:21:05.213 "adrfam": "IPv4", 00:21:05.213 "traddr": "10.0.0.2", 00:21:05.213 "trsvcid": "4420" 00:21:05.213 }, 00:21:05.213 "peer_address": { 00:21:05.213 "trtype": "TCP", 00:21:05.213 "adrfam": "IPv4", 00:21:05.213 "traddr": "10.0.0.1", 00:21:05.213 "trsvcid": "45466" 00:21:05.213 }, 00:21:05.213 "auth": { 00:21:05.213 "state": "completed", 00:21:05.213 "digest": "sha512", 00:21:05.213 "dhgroup": "ffdhe2048" 00:21:05.213 } 00:21:05.213 } 00:21:05.213 ]' 00:21:05.213 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.472 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.472 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.472 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.472 16:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.472 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.473 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.473 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.731 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:05.731 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:06.297 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.297 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.297 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.297 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.297 
16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.297 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.297 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.297 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.297 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.298 16:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.298 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.556 00:21:06.556 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.556 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.556 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.814 { 00:21:06.814 "cntlid": 111, 
00:21:06.814 "qid": 0, 00:21:06.814 "state": "enabled", 00:21:06.814 "thread": "nvmf_tgt_poll_group_000", 00:21:06.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.814 "listen_address": { 00:21:06.814 "trtype": "TCP", 00:21:06.814 "adrfam": "IPv4", 00:21:06.814 "traddr": "10.0.0.2", 00:21:06.814 "trsvcid": "4420" 00:21:06.814 }, 00:21:06.814 "peer_address": { 00:21:06.814 "trtype": "TCP", 00:21:06.814 "adrfam": "IPv4", 00:21:06.814 "traddr": "10.0.0.1", 00:21:06.814 "trsvcid": "45494" 00:21:06.814 }, 00:21:06.814 "auth": { 00:21:06.814 "state": "completed", 00:21:06.814 "digest": "sha512", 00:21:06.814 "dhgroup": "ffdhe2048" 00:21:06.814 } 00:21:06.814 } 00:21:06.814 ]' 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.814 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.072 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.072 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.072 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.072 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.072 16:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.072 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:07.072 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.639 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.898 16:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.898 16:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.156 00:21:08.156 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.156 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.156 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.415 { 00:21:08.415 "cntlid": 113, 00:21:08.415 "qid": 0, 00:21:08.415 "state": "enabled", 00:21:08.415 "thread": "nvmf_tgt_poll_group_000", 00:21:08.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.415 "listen_address": { 00:21:08.415 "trtype": "TCP", 00:21:08.415 "adrfam": "IPv4", 00:21:08.415 "traddr": "10.0.0.2", 00:21:08.415 "trsvcid": "4420" 00:21:08.415 }, 00:21:08.415 "peer_address": { 00:21:08.415 "trtype": "TCP", 00:21:08.415 "adrfam": "IPv4", 00:21:08.415 "traddr": "10.0.0.1", 00:21:08.415 "trsvcid": "45526" 00:21:08.415 }, 00:21:08.415 "auth": { 00:21:08.415 "state": 
"completed", 00:21:08.415 "digest": "sha512", 00:21:08.415 "dhgroup": "ffdhe3072" 00:21:08.415 } 00:21:08.415 } 00:21:08.415 ]' 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.415 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.673 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.673 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.673 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.673 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:08.673 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret 
DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:09.240 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.240 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.240 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.240 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.240 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.240 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.240 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.240 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.498 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.757 00:21:09.757 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.757 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.757 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.015 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.015 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.015 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.015 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.015 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.015 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.015 { 00:21:10.015 "cntlid": 115, 00:21:10.015 "qid": 0, 00:21:10.015 "state": "enabled", 00:21:10.015 "thread": "nvmf_tgt_poll_group_000", 00:21:10.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.015 "listen_address": { 00:21:10.015 "trtype": "TCP", 00:21:10.015 "adrfam": "IPv4", 00:21:10.015 "traddr": "10.0.0.2", 00:21:10.015 "trsvcid": "4420" 00:21:10.015 }, 00:21:10.015 "peer_address": { 00:21:10.015 "trtype": "TCP", 00:21:10.015 "adrfam": "IPv4", 00:21:10.015 "traddr": "10.0.0.1", 00:21:10.015 "trsvcid": "45558" 00:21:10.015 }, 00:21:10.015 "auth": { 00:21:10.015 "state": "completed", 00:21:10.015 "digest": "sha512", 00:21:10.015 "dhgroup": "ffdhe3072" 00:21:10.015 } 00:21:10.015 } 00:21:10.015 ]' 00:21:10.015 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.015 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.015 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.015 16:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.015 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.015 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.015 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.015 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.274 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:10.274 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:10.840 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.840 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.840 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:10.840 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.840 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.840 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.840 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.840 16:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.099 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.357 00:21:11.357 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.357 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.357 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.615 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.616 16:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.616 { 00:21:11.616 "cntlid": 117, 00:21:11.616 "qid": 0, 00:21:11.616 "state": "enabled", 00:21:11.616 "thread": "nvmf_tgt_poll_group_000", 00:21:11.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.616 "listen_address": { 00:21:11.616 "trtype": "TCP", 00:21:11.616 "adrfam": "IPv4", 00:21:11.616 "traddr": "10.0.0.2", 00:21:11.616 "trsvcid": "4420" 00:21:11.616 }, 00:21:11.616 "peer_address": { 00:21:11.616 "trtype": "TCP", 00:21:11.616 "adrfam": "IPv4", 00:21:11.616 "traddr": "10.0.0.1", 00:21:11.616 "trsvcid": "45588" 00:21:11.616 }, 00:21:11.616 "auth": { 00:21:11.616 "state": "completed", 00:21:11.616 "digest": "sha512", 00:21:11.616 "dhgroup": "ffdhe3072" 00:21:11.616 } 00:21:11.616 } 00:21:11.616 ]' 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.616 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.874 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:11.874 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:12.441 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.441 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.441 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.441 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.441 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.441 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.441 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.441 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.700 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.958 00:21:12.958 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.958 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.958 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.216 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.216 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.216 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.216 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.216 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.217 { 00:21:13.217 "cntlid": 119, 00:21:13.217 "qid": 0, 00:21:13.217 "state": "enabled", 00:21:13.217 "thread": "nvmf_tgt_poll_group_000", 00:21:13.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.217 "listen_address": { 00:21:13.217 "trtype": "TCP", 00:21:13.217 "adrfam": "IPv4", 00:21:13.217 "traddr": "10.0.0.2", 00:21:13.217 "trsvcid": "4420" 00:21:13.217 }, 00:21:13.217 "peer_address": { 00:21:13.217 "trtype": "TCP", 00:21:13.217 "adrfam": "IPv4", 00:21:13.217 "traddr": "10.0.0.1", 
00:21:13.217 "trsvcid": "49272" 00:21:13.217 }, 00:21:13.217 "auth": { 00:21:13.217 "state": "completed", 00:21:13.217 "digest": "sha512", 00:21:13.217 "dhgroup": "ffdhe3072" 00:21:13.217 } 00:21:13.217 } 00:21:13.217 ]' 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.217 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.474 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:13.474 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:14.040 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.040 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.040 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.040 16:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.040 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.040 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.040 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.040 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.040 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.299 16:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.299 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.557 00:21:14.557 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.557 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.557 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.816 { 00:21:14.816 "cntlid": 121, 00:21:14.816 "qid": 0, 00:21:14.816 "state": "enabled", 00:21:14.816 "thread": "nvmf_tgt_poll_group_000", 00:21:14.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:14.816 "listen_address": { 00:21:14.816 "trtype": "TCP", 00:21:14.816 "adrfam": "IPv4", 00:21:14.816 "traddr": "10.0.0.2", 00:21:14.816 "trsvcid": "4420" 00:21:14.816 }, 00:21:14.816 "peer_address": { 00:21:14.816 "trtype": "TCP", 00:21:14.816 "adrfam": "IPv4", 00:21:14.816 "traddr": "10.0.0.1", 00:21:14.816 "trsvcid": "49304" 00:21:14.816 }, 00:21:14.816 "auth": { 00:21:14.816 "state": "completed", 00:21:14.816 "digest": "sha512", 00:21:14.816 "dhgroup": "ffdhe4096" 00:21:14.816 } 00:21:14.816 } 00:21:14.816 ]' 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.816 16:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.816 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.074 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:15.074 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:15.640 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.640 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:15.640 16:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.640 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.640 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.640 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.640 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.640 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.898 16:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.898 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.899 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.899 16:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.157 00:21:16.157 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.157 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.157 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.157 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.157 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.157 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.157 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.415 { 00:21:16.415 "cntlid": 123, 00:21:16.415 "qid": 0, 00:21:16.415 "state": "enabled", 00:21:16.415 "thread": "nvmf_tgt_poll_group_000", 00:21:16.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.415 "listen_address": { 00:21:16.415 "trtype": "TCP", 00:21:16.415 "adrfam": "IPv4", 00:21:16.415 "traddr": "10.0.0.2", 00:21:16.415 "trsvcid": "4420" 00:21:16.415 }, 00:21:16.415 "peer_address": { 00:21:16.415 "trtype": "TCP", 00:21:16.415 "adrfam": "IPv4", 00:21:16.415 "traddr": "10.0.0.1", 00:21:16.415 "trsvcid": "49312" 00:21:16.415 }, 00:21:16.415 "auth": { 00:21:16.415 "state": "completed", 00:21:16.415 "digest": "sha512", 00:21:16.415 "dhgroup": "ffdhe4096" 00:21:16.415 } 00:21:16.415 } 00:21:16.415 ]' 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.415 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.674 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:16.674 16:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:17.241 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.241 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.241 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.241 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.241 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.241 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.241 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.241 16:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.499 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.758 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.758 { 00:21:17.758 "cntlid": 125, 00:21:17.758 "qid": 0, 00:21:17.758 "state": "enabled", 00:21:17.758 "thread": "nvmf_tgt_poll_group_000", 00:21:17.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:17.758 "listen_address": { 00:21:17.758 "trtype": "TCP", 00:21:17.758 "adrfam": "IPv4", 00:21:17.758 "traddr": "10.0.0.2", 00:21:17.758 
"trsvcid": "4420" 00:21:17.758 }, 00:21:17.758 "peer_address": { 00:21:17.758 "trtype": "TCP", 00:21:17.758 "adrfam": "IPv4", 00:21:17.758 "traddr": "10.0.0.1", 00:21:17.758 "trsvcid": "49330" 00:21:17.758 }, 00:21:17.758 "auth": { 00:21:17.758 "state": "completed", 00:21:17.758 "digest": "sha512", 00:21:17.758 "dhgroup": "ffdhe4096" 00:21:17.758 } 00:21:17.758 } 00:21:17.758 ]' 00:21:17.758 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.016 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.016 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.016 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.016 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.016 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.016 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.016 16:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.274 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:18.275 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.841 16:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.099 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.358 { 00:21:19.358 "cntlid": 127, 00:21:19.358 "qid": 0, 00:21:19.358 "state": "enabled", 00:21:19.358 "thread": "nvmf_tgt_poll_group_000", 00:21:19.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.358 "listen_address": { 00:21:19.358 "trtype": "TCP", 00:21:19.358 "adrfam": "IPv4", 00:21:19.358 "traddr": "10.0.0.2", 00:21:19.358 "trsvcid": "4420" 00:21:19.358 }, 00:21:19.358 "peer_address": { 00:21:19.358 "trtype": "TCP", 00:21:19.358 "adrfam": "IPv4", 00:21:19.358 "traddr": "10.0.0.1", 00:21:19.358 "trsvcid": "49356" 00:21:19.358 }, 00:21:19.358 "auth": { 00:21:19.358 "state": "completed", 00:21:19.358 "digest": "sha512", 00:21:19.358 "dhgroup": "ffdhe4096" 00:21:19.358 } 00:21:19.358 } 00:21:19.358 ]' 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.358 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.616 
16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.616 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.616 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.616 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.616 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.874 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:19.874 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.442 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.008 00:21:21.008 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.008 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.008 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.008 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.008 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.008 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.008 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.008 16:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.008 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.008 { 00:21:21.008 "cntlid": 129, 00:21:21.008 "qid": 0, 00:21:21.008 "state": "enabled", 00:21:21.008 "thread": "nvmf_tgt_poll_group_000", 00:21:21.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.008 "listen_address": { 00:21:21.008 "trtype": "TCP", 00:21:21.008 "adrfam": "IPv4", 00:21:21.008 "traddr": "10.0.0.2", 00:21:21.008 "trsvcid": "4420" 00:21:21.008 }, 00:21:21.008 "peer_address": { 00:21:21.008 "trtype": "TCP", 00:21:21.008 "adrfam": "IPv4", 00:21:21.008 "traddr": "10.0.0.1", 00:21:21.008 "trsvcid": "49370" 00:21:21.008 }, 00:21:21.008 "auth": { 00:21:21.008 "state": "completed", 00:21:21.008 "digest": "sha512", 00:21:21.008 "dhgroup": "ffdhe6144" 00:21:21.008 } 00:21:21.008 } 00:21:21.008 ]' 00:21:21.008 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.266 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.266 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.266 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.266 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.266 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.266 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.266 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.524 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:21.524 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:22.089 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.089 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.089 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.089 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.090 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.090 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.090 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.090 16:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.090 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.347 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.348 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.348 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.348 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.606 00:21:22.606 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.606 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.606 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.865 { 00:21:22.865 "cntlid": 131, 00:21:22.865 "qid": 0, 00:21:22.865 "state": "enabled", 00:21:22.865 "thread": "nvmf_tgt_poll_group_000", 00:21:22.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.865 "listen_address": { 00:21:22.865 "trtype": "TCP", 00:21:22.865 "adrfam": "IPv4", 00:21:22.865 "traddr": "10.0.0.2", 00:21:22.865 
"trsvcid": "4420" 00:21:22.865 }, 00:21:22.865 "peer_address": { 00:21:22.865 "trtype": "TCP", 00:21:22.865 "adrfam": "IPv4", 00:21:22.865 "traddr": "10.0.0.1", 00:21:22.865 "trsvcid": "42142" 00:21:22.865 }, 00:21:22.865 "auth": { 00:21:22.865 "state": "completed", 00:21:22.865 "digest": "sha512", 00:21:22.865 "dhgroup": "ffdhe6144" 00:21:22.865 } 00:21:22.865 } 00:21:22.865 ]' 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.865 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.123 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:23.123 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:23.691 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.691 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.691 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.691 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.691 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.691 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.691 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.691 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.950 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.951 16:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.209 00:21:24.209 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.209 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:24.209 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.467 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.467 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.467 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.467 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.467 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.467 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.467 { 00:21:24.467 "cntlid": 133, 00:21:24.467 "qid": 0, 00:21:24.467 "state": "enabled", 00:21:24.467 "thread": "nvmf_tgt_poll_group_000", 00:21:24.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.467 "listen_address": { 00:21:24.467 "trtype": "TCP", 00:21:24.467 "adrfam": "IPv4", 00:21:24.467 "traddr": "10.0.0.2", 00:21:24.468 "trsvcid": "4420" 00:21:24.468 }, 00:21:24.468 "peer_address": { 00:21:24.468 "trtype": "TCP", 00:21:24.468 "adrfam": "IPv4", 00:21:24.468 "traddr": "10.0.0.1", 00:21:24.468 "trsvcid": "42170" 00:21:24.468 }, 00:21:24.468 "auth": { 00:21:24.468 "state": "completed", 00:21:24.468 "digest": "sha512", 00:21:24.468 "dhgroup": "ffdhe6144" 00:21:24.468 } 00:21:24.468 } 00:21:24.468 ]' 00:21:24.468 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.468 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.468 16:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.468 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.468 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.468 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.468 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.468 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.726 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:24.727 16:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:25.292 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.292 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.292 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.292 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.292 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.292 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.292 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.292 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.550 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.808 00:21:25.808 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.808 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.808 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.066 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.066 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.066 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.066 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:26.066 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.066 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.066 { 00:21:26.066 "cntlid": 135, 00:21:26.066 "qid": 0, 00:21:26.066 "state": "enabled", 00:21:26.066 "thread": "nvmf_tgt_poll_group_000", 00:21:26.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.066 "listen_address": { 00:21:26.066 "trtype": "TCP", 00:21:26.066 "adrfam": "IPv4", 00:21:26.066 "traddr": "10.0.0.2", 00:21:26.066 "trsvcid": "4420" 00:21:26.066 }, 00:21:26.066 "peer_address": { 00:21:26.066 "trtype": "TCP", 00:21:26.067 "adrfam": "IPv4", 00:21:26.067 "traddr": "10.0.0.1", 00:21:26.067 "trsvcid": "42202" 00:21:26.067 }, 00:21:26.067 "auth": { 00:21:26.067 "state": "completed", 00:21:26.067 "digest": "sha512", 00:21:26.067 "dhgroup": "ffdhe6144" 00:21:26.067 } 00:21:26.067 } 00:21:26.067 ]' 00:21:26.067 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.067 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.067 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.067 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.067 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.325 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.325 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.325 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.325 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:26.325 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:26.891 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.891 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.891 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.891 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.891 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.891 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.891 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.891 16:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:26.891 16:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.150 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.830 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.830 { 00:21:27.830 "cntlid": 137, 00:21:27.830 "qid": 0, 00:21:27.830 "state": "enabled", 00:21:27.830 "thread": "nvmf_tgt_poll_group_000", 00:21:27.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.830 "listen_address": { 00:21:27.830 "trtype": "TCP", 00:21:27.830 "adrfam": "IPv4", 00:21:27.830 "traddr": "10.0.0.2", 00:21:27.830 
"trsvcid": "4420" 00:21:27.830 }, 00:21:27.830 "peer_address": { 00:21:27.830 "trtype": "TCP", 00:21:27.830 "adrfam": "IPv4", 00:21:27.830 "traddr": "10.0.0.1", 00:21:27.830 "trsvcid": "42228" 00:21:27.830 }, 00:21:27.830 "auth": { 00:21:27.830 "state": "completed", 00:21:27.830 "digest": "sha512", 00:21:27.830 "dhgroup": "ffdhe8192" 00:21:27.830 } 00:21:27.830 } 00:21:27.830 ]' 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.830 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.104 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.104 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.104 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.104 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:28.104 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:28.671 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.671 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.671 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.671 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.671 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.671 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.671 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.671 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.930 16:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.930 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.497 00:21:29.497 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.497 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.497 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.755 { 00:21:29.755 "cntlid": 139, 00:21:29.755 "qid": 0, 00:21:29.755 "state": "enabled", 00:21:29.755 "thread": "nvmf_tgt_poll_group_000", 00:21:29.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.755 "listen_address": { 00:21:29.755 "trtype": "TCP", 00:21:29.755 "adrfam": "IPv4", 00:21:29.755 "traddr": "10.0.0.2", 00:21:29.755 "trsvcid": "4420" 00:21:29.755 }, 00:21:29.755 "peer_address": { 00:21:29.755 "trtype": "TCP", 00:21:29.755 "adrfam": "IPv4", 00:21:29.755 "traddr": "10.0.0.1", 00:21:29.755 "trsvcid": "42264" 00:21:29.755 }, 00:21:29.755 "auth": { 00:21:29.755 "state": "completed", 00:21:29.755 "digest": "sha512", 00:21:29.755 "dhgroup": "ffdhe8192" 00:21:29.755 } 00:21:29.755 } 00:21:29.755 ]' 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.755 16:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.755 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.014 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:30.014 16:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: --dhchap-ctrl-secret DHHC-1:02:YjhlNDQ1OWE4YjlhMTI4MzYyNGIwMGRiYTkwNzgzNmU3M2EzZGMxZjdjODIyOTFm5BplhA==: 00:21:30.580 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.580 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:30.580 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.580 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.580 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.580 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.580 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.580 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.839 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.406 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.406 16:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.406 { 00:21:31.406 "cntlid": 141, 00:21:31.406 "qid": 0, 00:21:31.406 "state": "enabled", 00:21:31.406 "thread": "nvmf_tgt_poll_group_000", 00:21:31.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.406 "listen_address": { 00:21:31.406 "trtype": "TCP", 00:21:31.406 "adrfam": "IPv4", 00:21:31.406 "traddr": "10.0.0.2", 00:21:31.406 "trsvcid": "4420" 00:21:31.406 }, 00:21:31.406 "peer_address": { 00:21:31.406 "trtype": "TCP", 00:21:31.406 "adrfam": "IPv4", 00:21:31.406 "traddr": "10.0.0.1", 00:21:31.406 "trsvcid": "42298" 00:21:31.406 }, 00:21:31.406 "auth": { 00:21:31.406 "state": "completed", 00:21:31.406 "digest": "sha512", 00:21:31.406 "dhgroup": "ffdhe8192" 00:21:31.406 } 00:21:31.406 } 00:21:31.406 ]' 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.406 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.664 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.664 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.664 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.664 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.664 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.922 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:31.923 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:01:NzYxNDlhZDM4ZGY3NWZhOGMwYTM5MmQxMDM4MTRiYWN+NOK0: 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.489 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.490 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.490 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.056 00:21:33.056 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.056 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.056 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.315 { 00:21:33.315 "cntlid": 143, 00:21:33.315 "qid": 0, 00:21:33.315 "state": "enabled", 00:21:33.315 "thread": "nvmf_tgt_poll_group_000", 00:21:33.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.315 "listen_address": { 00:21:33.315 "trtype": "TCP", 00:21:33.315 "adrfam": 
"IPv4", 00:21:33.315 "traddr": "10.0.0.2", 00:21:33.315 "trsvcid": "4420" 00:21:33.315 }, 00:21:33.315 "peer_address": { 00:21:33.315 "trtype": "TCP", 00:21:33.315 "adrfam": "IPv4", 00:21:33.315 "traddr": "10.0.0.1", 00:21:33.315 "trsvcid": "48678" 00:21:33.315 }, 00:21:33.315 "auth": { 00:21:33.315 "state": "completed", 00:21:33.315 "digest": "sha512", 00:21:33.315 "dhgroup": "ffdhe8192" 00:21:33.315 } 00:21:33.315 } 00:21:33.315 ]' 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.315 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.573 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:33.573 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:34.140 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.141 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.141 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.399 16:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.399 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.966 00:21:34.966 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.966 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.966 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.966 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.966 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.967 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.967 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.967 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.967 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.967 { 00:21:34.967 "cntlid": 145, 00:21:34.967 "qid": 0, 00:21:34.967 "state": "enabled", 00:21:34.967 "thread": "nvmf_tgt_poll_group_000", 00:21:34.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.967 "listen_address": { 00:21:34.967 "trtype": "TCP", 00:21:34.967 "adrfam": "IPv4", 00:21:34.967 "traddr": "10.0.0.2", 00:21:34.967 "trsvcid": "4420" 00:21:34.967 }, 00:21:34.967 "peer_address": { 00:21:34.967 "trtype": "TCP", 00:21:34.967 "adrfam": "IPv4", 00:21:34.967 "traddr": "10.0.0.1", 00:21:34.967 "trsvcid": "48698" 00:21:34.967 }, 00:21:34.967 "auth": { 00:21:34.967 "state": 
"completed", 00:21:34.967 "digest": "sha512", 00:21:34.967 "dhgroup": "ffdhe8192" 00:21:34.967 } 00:21:34.967 } 00:21:34.967 ]' 00:21:34.967 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.233 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.233 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.233 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.233 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.233 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.233 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.234 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.492 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:35.492 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWVlN2RjYjFkZjA3NGQ1YTcwNGIxOThjMDQ5YjQyMTA1ZWQwMmIzNzQxOTdmZGE4FbkJHw==: --dhchap-ctrl-secret 
DHHC-1:03:ZmE1MWJhMzE5YWNhYjMxOTVlMmExZTUxZWU5ZTBkNjlmMTg0YzI0MGI2NWYyZTY2ZDA5NThjZmZlZjAwYzYyN1qlw+o=: 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:36.059 16:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:36.318 request: 00:21:36.318 { 00:21:36.318 "name": "nvme0", 00:21:36.318 "trtype": "tcp", 00:21:36.318 "traddr": "10.0.0.2", 00:21:36.318 "adrfam": "ipv4", 00:21:36.318 "trsvcid": "4420", 00:21:36.318 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:36.318 "prchk_reftag": false, 00:21:36.318 "prchk_guard": false, 00:21:36.318 "hdgst": false, 00:21:36.318 "ddgst": false, 00:21:36.318 "dhchap_key": "key2", 00:21:36.318 "allow_unrecognized_csi": false, 00:21:36.318 "method": "bdev_nvme_attach_controller", 00:21:36.318 "req_id": 1 00:21:36.318 } 00:21:36.318 Got JSON-RPC error response 00:21:36.318 response: 00:21:36.318 { 00:21:36.318 "code": -5, 00:21:36.318 "message": 
"Input/output error" 00:21:36.318 } 00:21:36.318 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:36.318 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.318 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.318 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.318 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.318 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.318 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:36.577 16:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.577 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:36.836 request: 00:21:36.836 { 00:21:36.836 "name": "nvme0", 00:21:36.836 "trtype": "tcp", 00:21:36.836 "traddr": "10.0.0.2", 00:21:36.836 "adrfam": "ipv4", 00:21:36.836 "trsvcid": "4420", 00:21:36.836 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:36.836 "prchk_reftag": false, 00:21:36.836 "prchk_guard": false, 00:21:36.836 "hdgst": 
false, 00:21:36.836 "ddgst": false, 00:21:36.836 "dhchap_key": "key1", 00:21:36.836 "dhchap_ctrlr_key": "ckey2", 00:21:36.836 "allow_unrecognized_csi": false, 00:21:36.836 "method": "bdev_nvme_attach_controller", 00:21:36.836 "req_id": 1 00:21:36.836 } 00:21:36.836 Got JSON-RPC error response 00:21:36.836 response: 00:21:36.836 { 00:21:36.836 "code": -5, 00:21:36.836 "message": "Input/output error" 00:21:36.836 } 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.836 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.404 request: 00:21:37.404 { 00:21:37.404 "name": "nvme0", 00:21:37.404 "trtype": 
"tcp", 00:21:37.404 "traddr": "10.0.0.2", 00:21:37.404 "adrfam": "ipv4", 00:21:37.404 "trsvcid": "4420", 00:21:37.404 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:37.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.404 "prchk_reftag": false, 00:21:37.404 "prchk_guard": false, 00:21:37.404 "hdgst": false, 00:21:37.404 "ddgst": false, 00:21:37.404 "dhchap_key": "key1", 00:21:37.404 "dhchap_ctrlr_key": "ckey1", 00:21:37.404 "allow_unrecognized_csi": false, 00:21:37.404 "method": "bdev_nvme_attach_controller", 00:21:37.404 "req_id": 1 00:21:37.404 } 00:21:37.404 Got JSON-RPC error response 00:21:37.404 response: 00:21:37.404 { 00:21:37.404 "code": -5, 00:21:37.404 "message": "Input/output error" 00:21:37.404 } 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 982732 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 982732 ']' 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 982732 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982732 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 982732' 00:21:37.404 killing process with pid 982732 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 982732 00:21:37.404 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 982732 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1004270 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1004270 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1004270 ']' 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.663 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1004270 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1004270 ']' 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:37.922 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:37.922 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.922 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.181 null0 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8WQ 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.hlf ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hlf 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Sbg 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.vpg ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vpg 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5uO 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.o98 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o98 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.d77 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.181 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.118 nvme0n1 00:21:39.118 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.118 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.118 16:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.118 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.118 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.118 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.118 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.118 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.118 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.118 { 00:21:39.118 "cntlid": 1, 00:21:39.118 "qid": 0, 00:21:39.118 "state": "enabled", 00:21:39.118 "thread": "nvmf_tgt_poll_group_000", 00:21:39.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.118 "listen_address": { 00:21:39.118 "trtype": "TCP", 00:21:39.118 "adrfam": "IPv4", 00:21:39.118 "traddr": "10.0.0.2", 00:21:39.118 "trsvcid": "4420" 00:21:39.118 }, 00:21:39.118 "peer_address": { 00:21:39.118 "trtype": "TCP", 00:21:39.118 "adrfam": "IPv4", 00:21:39.118 "traddr": 
"10.0.0.1", 00:21:39.118 "trsvcid": "48760" 00:21:39.118 }, 00:21:39.118 "auth": { 00:21:39.118 "state": "completed", 00:21:39.118 "digest": "sha512", 00:21:39.118 "dhgroup": "ffdhe8192" 00:21:39.118 } 00:21:39.118 } 00:21:39.118 ]' 00:21:39.118 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.118 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.376 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.377 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.377 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.377 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.377 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.377 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.636 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:39.636 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:40.203 16:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:40.203 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.462 16:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.462 request: 00:21:40.462 { 00:21:40.462 "name": "nvme0", 00:21:40.462 "trtype": "tcp", 00:21:40.462 "traddr": "10.0.0.2", 00:21:40.462 "adrfam": "ipv4", 00:21:40.462 "trsvcid": "4420", 00:21:40.462 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.462 "prchk_reftag": false, 00:21:40.462 "prchk_guard": false, 00:21:40.462 "hdgst": false, 00:21:40.462 "ddgst": false, 00:21:40.462 "dhchap_key": "key3", 00:21:40.462 
"allow_unrecognized_csi": false, 00:21:40.462 "method": "bdev_nvme_attach_controller", 00:21:40.462 "req_id": 1 00:21:40.462 } 00:21:40.462 Got JSON-RPC error response 00:21:40.462 response: 00:21:40.462 { 00:21:40.462 "code": -5, 00:21:40.462 "message": "Input/output error" 00:21:40.462 } 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:40.462 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.721 16:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.721 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.980 request: 00:21:40.980 { 00:21:40.980 "name": "nvme0", 00:21:40.980 "trtype": "tcp", 00:21:40.980 "traddr": "10.0.0.2", 00:21:40.980 "adrfam": "ipv4", 00:21:40.980 "trsvcid": "4420", 00:21:40.980 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.980 "prchk_reftag": false, 00:21:40.980 "prchk_guard": false, 00:21:40.980 "hdgst": false, 00:21:40.980 "ddgst": false, 00:21:40.980 "dhchap_key": "key3", 00:21:40.980 "allow_unrecognized_csi": false, 00:21:40.980 "method": "bdev_nvme_attach_controller", 00:21:40.980 "req_id": 1 00:21:40.980 } 00:21:40.980 Got JSON-RPC error response 00:21:40.980 response: 00:21:40.980 { 00:21:40.980 "code": -5, 00:21:40.980 "message": "Input/output error" 00:21:40.980 } 00:21:40.980 
16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.980 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.239 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.497 request: 00:21:41.497 { 00:21:41.497 "name": "nvme0", 00:21:41.497 "trtype": "tcp", 00:21:41.497 "traddr": "10.0.0.2", 00:21:41.497 "adrfam": "ipv4", 00:21:41.497 "trsvcid": "4420", 00:21:41.497 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.497 "prchk_reftag": false, 00:21:41.497 "prchk_guard": false, 00:21:41.497 "hdgst": false, 00:21:41.497 "ddgst": false, 00:21:41.497 "dhchap_key": "key0", 00:21:41.497 "dhchap_ctrlr_key": "key1", 00:21:41.497 "allow_unrecognized_csi": false, 00:21:41.497 "method": "bdev_nvme_attach_controller", 00:21:41.497 "req_id": 1 00:21:41.497 } 00:21:41.497 Got JSON-RPC error response 00:21:41.497 response: 00:21:41.497 { 00:21:41.497 "code": -5, 00:21:41.497 "message": "Input/output error" 00:21:41.497 } 00:21:41.497 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.497 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.497 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.497 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.497 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:41.497 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:41.497 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:41.758 nvme0n1 00:21:41.758 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:41.758 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.758 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:42.017 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.017 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.017 16:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.275 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:42.275 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.275 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:42.275 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.275 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:42.275 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:42.275 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:42.842 nvme0n1 00:21:42.842 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:42.842 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:42.842 16:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.101 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.101 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:43.101 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.101 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.101 
16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.101 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:43.101 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.101 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:43.359 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.359 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:43.359 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: --dhchap-ctrl-secret DHHC-1:03:Nzg2ZTZmMzZmZjlhNzE4MGQ0MTRlNzFiMjEwNDc4YzAyYTg3MWE1Yjk0ZWNiZjI3NjEwYjE0OGE4ZDAzYjk2ZcaJcS4=: 00:21:43.926 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:43.927 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:43.927 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:43.927 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:43.927 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:43.927 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:43.927 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:43.927 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.927 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:44.185 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:44.443 request: 00:21:44.443 { 00:21:44.443 "name": "nvme0", 00:21:44.443 "trtype": "tcp", 00:21:44.443 "traddr": "10.0.0.2", 00:21:44.443 "adrfam": "ipv4", 00:21:44.443 "trsvcid": "4420", 00:21:44.443 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.443 "prchk_reftag": false, 00:21:44.443 "prchk_guard": false, 00:21:44.443 "hdgst": false, 00:21:44.443 "ddgst": false, 00:21:44.443 "dhchap_key": "key1", 00:21:44.443 "allow_unrecognized_csi": false, 00:21:44.443 "method": "bdev_nvme_attach_controller", 00:21:44.443 "req_id": 1 00:21:44.443 } 00:21:44.444 Got JSON-RPC error response 00:21:44.444 response: 00:21:44.444 { 00:21:44.444 "code": -5, 00:21:44.444 "message": "Input/output error" 00:21:44.444 } 00:21:44.702 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:44.702 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.702 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.702 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.702 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.702 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.702 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.269 nvme0n1 00:21:45.269 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:45.269 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:45.269 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.527 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.527 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.527 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.785 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.785 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.785 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:45.785 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.785 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:45.785 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:45.785 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:46.044 nvme0n1 00:21:46.044 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:46.044 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.044 16:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: '' 2s 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: ]] 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MWRhYjFmYWYyYmZlOTMyOWMyYzZiYTdiNDU4M2RhZjL7erm2: 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:46.302 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:48.835 
16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: 2s 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:48.835 16:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: ]] 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjdlZmU1ZDQzNWIxZWNjZGEzNTRhYTA2OWJjMmVjYzcwZTUwYjBhZTZkZjdlM2M5FCwhog==: 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:48.835 16:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:50.739 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:51.307 nvme0n1 00:21:51.307 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.307 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.307 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.307 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.307 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.307 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.873 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:51.873 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:51.873 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.874 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.874 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.874 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.874 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.874 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.874 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:51.874 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:52.132 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:52.132 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:52.132 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.391 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.958 request: 00:21:52.958 { 00:21:52.958 "name": "nvme0", 00:21:52.958 "dhchap_key": "key1", 00:21:52.958 "dhchap_ctrlr_key": "key3", 00:21:52.958 "method": "bdev_nvme_set_keys", 00:21:52.958 "req_id": 1 00:21:52.958 } 00:21:52.958 Got JSON-RPC error response 00:21:52.958 response: 00:21:52.958 { 00:21:52.958 "code": -13, 00:21:52.958 "message": "Permission denied" 00:21:52.958 } 00:21:52.958 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:52.958 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:52.958 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:52.958 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:52.958 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:52.958 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:52.958 16:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.958 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:52.958 16:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:54.335 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:54.335 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.335 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:54.902 nvme0n1 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.902 16:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:54.902 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:55.480 request: 00:21:55.480 { 00:21:55.480 "name": "nvme0", 00:21:55.480 "dhchap_key": "key2", 00:21:55.480 "dhchap_ctrlr_key": "key0", 00:21:55.480 "method": "bdev_nvme_set_keys", 00:21:55.480 "req_id": 1 00:21:55.480 } 00:21:55.480 Got JSON-RPC error response 00:21:55.480 response: 00:21:55.480 { 00:21:55.480 "code": -13, 00:21:55.480 "message": "Permission denied" 00:21:55.480 } 00:21:55.480 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:55.480 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:55.481 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:55.481 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:55.481 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:55.481 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.481 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:55.739 16:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:55.739 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:56.675 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:56.675 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:56.675 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.933 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:56.933 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 982912 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 982912 ']' 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 982912 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982912 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 982912' 00:21:56.934 killing process with pid 982912 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 982912 00:21:56.934 16:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 982912 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.193 rmmod nvme_tcp 00:21:57.193 rmmod nvme_fabrics 00:21:57.193 rmmod nvme_keyring 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1004270 ']' 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1004270 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1004270 ']' 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1004270 00:21:57.193 
16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1004270 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1004270' 00:21:57.193 killing process with pid 1004270 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1004270 00:21:57.193 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1004270 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.452 16:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.452 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8WQ /tmp/spdk.key-sha256.Sbg /tmp/spdk.key-sha384.5uO /tmp/spdk.key-sha512.d77 /tmp/spdk.key-sha512.hlf /tmp/spdk.key-sha384.vpg /tmp/spdk.key-sha256.o98 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:59.988 00:21:59.988 real 2m31.307s 00:21:59.988 user 5m49.020s 00:21:59.988 sys 0m23.951s 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.988 ************************************ 00:21:59.988 END TEST nvmf_auth_target 00:21:59.988 ************************************ 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:59.988 ************************************ 00:21:59.988 START TEST nvmf_bdevio_no_huge 00:21:59.988 ************************************ 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:59.988 * Looking for test storage... 00:21:59.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.988 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:59.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.988 --rc genhtml_branch_coverage=1 00:21:59.988 --rc genhtml_function_coverage=1 00:21:59.988 --rc genhtml_legend=1 00:21:59.988 --rc geninfo_all_blocks=1 00:21:59.988 --rc geninfo_unexecuted_blocks=1 00:21:59.988 00:21:59.988 ' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:59.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.989 --rc genhtml_branch_coverage=1 00:21:59.989 --rc genhtml_function_coverage=1 00:21:59.989 --rc genhtml_legend=1 00:21:59.989 --rc geninfo_all_blocks=1 00:21:59.989 --rc geninfo_unexecuted_blocks=1 00:21:59.989 00:21:59.989 ' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:59.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.989 --rc genhtml_branch_coverage=1 00:21:59.989 --rc genhtml_function_coverage=1 00:21:59.989 --rc genhtml_legend=1 00:21:59.989 --rc geninfo_all_blocks=1 00:21:59.989 --rc geninfo_unexecuted_blocks=1 00:21:59.989 00:21:59.989 ' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:59.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.989 --rc genhtml_branch_coverage=1 
00:21:59.989 --rc genhtml_function_coverage=1 00:21:59.989 --rc genhtml_legend=1 00:21:59.989 --rc geninfo_all_blocks=1 00:21:59.989 --rc geninfo_unexecuted_blocks=1 00:21:59.989 00:21:59.989 ' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:59.989 16:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.989 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:22:06.558 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:06.558 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:06.558 Found net devices under 0000:af:00.0: cvl_0_0 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.558 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.559 
16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:06.559 Found net devices under 0000:af:00.1: cvl_0_1 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:06.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:22:06.559 00:22:06.559 --- 10.0.0.2 ping statistics --- 00:22:06.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.559 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:22:06.559 00:22:06.559 --- 10.0.0.1 ping statistics --- 00:22:06.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.559 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1010988 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1010988 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1010988 ']' 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.559 [2024-12-14 16:35:35.762108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:06.559 [2024-12-14 16:35:35.762162] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:06.559 [2024-12-14 16:35:35.845232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.559 [2024-12-14 16:35:35.881394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.559 [2024-12-14 16:35:35.881430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.559 [2024-12-14 16:35:35.881436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.559 [2024-12-14 16:35:35.881443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.559 [2024-12-14 16:35:35.881448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.559 [2024-12-14 16:35:35.882450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.559 [2024-12-14 16:35:35.882591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:06.559 [2024-12-14 16:35:35.882699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.559 [2024-12-14 16:35:35.882699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.559 16:35:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.559 [2024-12-14 16:35:36.022781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:06.559 16:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.559 Malloc0 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:06.559 [2024-12-14 16:35:36.067039] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.559 16:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.559 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.560 { 00:22:06.560 "params": { 00:22:06.560 "name": "Nvme$subsystem", 00:22:06.560 "trtype": "$TEST_TRANSPORT", 00:22:06.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.560 "adrfam": "ipv4", 00:22:06.560 "trsvcid": "$NVMF_PORT", 00:22:06.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.560 "hdgst": ${hdgst:-false}, 00:22:06.560 "ddgst": ${ddgst:-false} 00:22:06.560 }, 00:22:06.560 "method": "bdev_nvme_attach_controller" 00:22:06.560 } 00:22:06.560 EOF 00:22:06.560 )") 00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:06.560 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:06.560 "params": { 00:22:06.560 "name": "Nvme1", 00:22:06.560 "trtype": "tcp", 00:22:06.560 "traddr": "10.0.0.2", 00:22:06.560 "adrfam": "ipv4", 00:22:06.560 "trsvcid": "4420", 00:22:06.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.560 "hdgst": false, 00:22:06.560 "ddgst": false 00:22:06.560 }, 00:22:06.560 "method": "bdev_nvme_attach_controller" 00:22:06.560 }' 00:22:06.560 [2024-12-14 16:35:36.117898] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:06.560 [2024-12-14 16:35:36.117941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1011065 ] 00:22:06.560 [2024-12-14 16:35:36.196744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:06.560 [2024-12-14 16:35:36.234009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.560 [2024-12-14 16:35:36.234117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.560 [2024-12-14 16:35:36.234118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.560 I/O targets: 00:22:06.560 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:06.560 00:22:06.560 00:22:06.560 CUnit - A unit testing framework for C - Version 2.1-3 00:22:06.560 http://cunit.sourceforge.net/ 00:22:06.560 00:22:06.560 00:22:06.560 Suite: bdevio tests on: Nvme1n1 00:22:06.560 Test: blockdev write read block ...passed 00:22:06.819 Test: blockdev write zeroes read block ...passed 00:22:06.819 Test: blockdev write zeroes read no split ...passed 00:22:06.819 Test: blockdev write zeroes 
read split ...passed 00:22:06.819 Test: blockdev write zeroes read split partial ...passed 00:22:06.819 Test: blockdev reset ...[2024-12-14 16:35:36.683269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:06.819 [2024-12-14 16:35:36.683333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0ad00 (9): Bad file descriptor 00:22:06.819 [2024-12-14 16:35:36.735376] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:06.819 passed 00:22:06.819 Test: blockdev write read 8 blocks ...passed 00:22:06.819 Test: blockdev write read size > 128k ...passed 00:22:06.819 Test: blockdev write read invalid size ...passed 00:22:06.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:06.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:06.819 Test: blockdev write read max offset ...passed 00:22:06.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:07.078 Test: blockdev writev readv 8 blocks ...passed 00:22:07.078 Test: blockdev writev readv 30 x 1block ...passed 00:22:07.078 Test: blockdev writev readv block ...passed 00:22:07.078 Test: blockdev writev readv size > 128k ...passed 00:22:07.078 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:07.078 Test: blockdev comparev and writev ...[2024-12-14 16:35:36.987511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.078 [2024-12-14 16:35:36.987544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:36.987562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.078 [2024-12-14 
16:35:36.987570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:36.987806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.078 [2024-12-14 16:35:36.987816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:36.987828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.078 [2024-12-14 16:35:36.987835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:36.988084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.078 [2024-12-14 16:35:36.988093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:36.988104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.078 [2024-12-14 16:35:36.988112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:36.988325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.078 [2024-12-14 16:35:36.988335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:36.988346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:07.078 [2024-12-14 16:35:36.988353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:07.078 passed 00:22:07.078 Test: blockdev nvme passthru rw ...passed 00:22:07.078 Test: blockdev nvme passthru vendor specific ...[2024-12-14 16:35:37.070916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.078 [2024-12-14 16:35:37.070931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:37.071034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.078 [2024-12-14 16:35:37.071044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:37.071143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.078 [2024-12-14 16:35:37.071152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:07.078 [2024-12-14 16:35:37.071250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.078 [2024-12-14 16:35:37.071259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:07.078 passed 00:22:07.078 Test: blockdev nvme admin passthru ...passed 00:22:07.078 Test: blockdev copy ...passed 00:22:07.078 00:22:07.078 Run Summary: Type Total Ran Passed Failed Inactive 00:22:07.078 suites 1 1 n/a 0 0 00:22:07.078 tests 23 23 23 0 0 00:22:07.078 asserts 152 152 152 0 n/a 00:22:07.078 00:22:07.078 Elapsed time = 1.147 seconds 
00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:07.337 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:07.337 rmmod nvme_tcp 00:22:07.337 rmmod nvme_fabrics 00:22:07.337 rmmod nvme_keyring 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1010988 ']' 00:22:07.596 16:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1010988 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1010988 ']' 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1010988 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010988 00:22:07.596 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:07.597 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:07.597 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010988' 00:22:07.597 killing process with pid 1010988 00:22:07.597 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1010988 00:22:07.597 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1010988 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:07.856 16:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.856 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.392 16:35:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:10.392 00:22:10.392 real 0m10.290s 00:22:10.392 user 0m11.480s 00:22:10.392 sys 0m5.353s 00:22:10.392 16:35:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.392 16:35:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.392 ************************************ 00:22:10.392 END TEST nvmf_bdevio_no_huge 00:22:10.392 ************************************ 00:22:10.392 16:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:10.392 16:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:10.392 16:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.392 16:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.392 
************************************ 00:22:10.392 START TEST nvmf_tls 00:22:10.392 ************************************ 00:22:10.392 16:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:10.392 * Looking for test storage... 00:22:10.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:10.392 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:10.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.393 --rc genhtml_branch_coverage=1 00:22:10.393 --rc genhtml_function_coverage=1 00:22:10.393 --rc genhtml_legend=1 00:22:10.393 --rc geninfo_all_blocks=1 00:22:10.393 --rc geninfo_unexecuted_blocks=1 00:22:10.393 00:22:10.393 ' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:10.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.393 --rc genhtml_branch_coverage=1 00:22:10.393 --rc genhtml_function_coverage=1 00:22:10.393 --rc genhtml_legend=1 00:22:10.393 --rc geninfo_all_blocks=1 00:22:10.393 --rc geninfo_unexecuted_blocks=1 00:22:10.393 00:22:10.393 ' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:10.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.393 --rc genhtml_branch_coverage=1 00:22:10.393 --rc genhtml_function_coverage=1 00:22:10.393 --rc genhtml_legend=1 00:22:10.393 --rc geninfo_all_blocks=1 00:22:10.393 --rc geninfo_unexecuted_blocks=1 00:22:10.393 00:22:10.393 ' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:10.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.393 --rc genhtml_branch_coverage=1 00:22:10.393 --rc genhtml_function_coverage=1 00:22:10.393 --rc genhtml_legend=1 00:22:10.393 --rc geninfo_all_blocks=1 00:22:10.393 --rc geninfo_unexecuted_blocks=1 00:22:10.393 00:22:10.393 ' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.393 
16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:10.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:10.393 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.668 16:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:15.668 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:15.668 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.668 16:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.668 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:15.669 Found net devices under 0000:af:00.0: cvl_0_0 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:15.669 Found net devices under 0000:af:00.1: cvl_0_1 00:22:15.669 16:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.669 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.669 
16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:22:15.927 00:22:15.927 --- 10.0.0.2 ping statistics --- 00:22:15.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.927 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:22:15.927 00:22:15.927 --- 10.0.0.1 ping statistics --- 00:22:15.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.927 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.927 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.928 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.928 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:15.928 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.928 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.928 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1014783 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1014783 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1014783 ']' 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.215 [2024-12-14 16:35:46.059606] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:16.215 [2024-12-14 16:35:46.059650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.215 [2024-12-14 16:35:46.140302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.215 [2024-12-14 16:35:46.161360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.215 [2024-12-14 16:35:46.161398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
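The `waitforlisten 1014783` call above (autotest_common.sh@842, printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") blocks until the freshly started `nvmf_tgt` brings up its JSON-RPC socket. A simplified sketch of that polling loop — it only waits for the socket path to appear, whereas the real helper also probes the RPC endpoint and checks the pid is alive:

```shell
#!/usr/bin/env bash
# Simplified stand-in for waitforlisten: poll until a socket path shows up.
# The real helper additionally verifies that the RPC server answers on it.
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```

Typical use mirrors the log: `waitforlisten_sketch /var/tmp/spdk.sock || exit 1` before issuing any `rpc.py` calls against the target.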
00:22:16.215 [2024-12-14 16:35:46.161405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.215 [2024-12-14 16:35:46.161411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.215 [2024-12-14 16:35:46.161416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.215 [2024-12-14 16:35:46.161912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:16.215 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:16.576 true 00:22:16.577 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:16.577 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:16.836 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:16.836 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:16.836 
16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:16.836 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:16.836 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:17.094 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:17.094 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:17.094 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:17.353 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:17.354 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:17.354 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:17.354 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:17.354 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:17.354 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:17.612 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:17.612 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:17.613 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
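The tls.sh@81-114 sequence above repeats one pattern for every option: set it via `rpc.py sock_impl_set_options -i ssl`, read it back via `sock_impl_get_options -i ssl | jq -r`, and fail if the readback does not match (hence checks like `[[ 13 != \1\3 ]]`, where the backslashes force a literal, non-glob comparison). A runnable sketch of the pattern with the RPC round trip stubbed out — `rpc_set`/`rpc_get` here are stand-ins, not SPDK's rpc.py:

```shell
#!/usr/bin/env bash
tls_version=0   # stand-in for the ssl sock impl's stored option value

rpc_set() { tls_version=$1; }       # stand-in for: rpc.py sock_impl_set_options -i ssl --tls-version N
rpc_get() { echo "$tls_version"; }  # stand-in for: rpc.py sock_impl_get_options -i ssl | jq -r .tls_version

rpc_set 13
version=$(rpc_get)
# tls.sh writes this as [[ $version != \1\3 ]]; the escapes keep it literal.
if [[ $version != 13 ]]; then
    echo "tls_version mismatch: $version" >&2
    exit 1
fi
echo "tls_version=$version"
```

The same skeleton covers the ktls toggles at tls.sh@97-114, with `.enable_ktls` in place of `.tls_version`.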
00:22:17.871 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:17.871 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:17.871 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:17.871 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:17.871 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:18.130 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:18.130 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:18.389 16:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Eb0yXY8bMn 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.eHbvH78FW4 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Eb0yXY8bMn 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.eHbvH78FW4 00:22:18.389 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:18.648 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:18.907 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Eb0yXY8bMn 00:22:18.907 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Eb0yXY8bMn 00:22:18.907 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.907 [2024-12-14 16:35:48.947480] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.907 16:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:19.165 16:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.424 [2024-12-14 16:35:49.292453] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.424 [2024-12-14 16:35:49.292660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.424 16:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.424 malloc0 00:22:19.424 16:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.683 16:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Eb0yXY8bMn 00:22:19.941 16:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:20.200 16:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Eb0yXY8bMn 00:22:30.173 Initializing NVMe Controllers 00:22:30.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.173 Initialization complete. Launching workers. 
00:22:30.173 ======================================================== 00:22:30.173 Latency(us) 00:22:30.173 Device Information : IOPS MiB/s Average min max 00:22:30.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16992.39 66.38 3766.50 832.36 5396.16 00:22:30.173 ======================================================== 00:22:30.173 Total : 16992.39 66.38 3766.50 832.36 5396.16 00:22:30.173 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eb0yXY8bMn 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Eb0yXY8bMn 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1017222 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1017222 /var/tmp/bdevperf.sock 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1017222 ']' 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
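The PSK files used above (`/tmp/tmp.Eb0yXY8bMn`, `/tmp/tmp.eHbvH78FW4`) were filled by `format_interchange_psk` back at tls.sh@119-120 via the inline `python -` helper in nvmf/common.sh@733: prefix `NVMeTLSkey-1`, a two-digit hash identifier, then base64 of the key bytes with a CRC32 appended. A reconstruction in the same shell-wrapping-python style, assuming the little-endian CRC32 of the NVMe TLS PSK interchange format (the function body is my reading of the helper, not a copy of it):

```shell
#!/usr/bin/env bash
# Reconstruction of format_interchange_psk: the NVMe TLS PSK interchange
# format is "NVMeTLSkey-1:<hmac>:<base64(key bytes + little-endian CRC32)>:".
format_interchange_psk() {
    local key=$1 hmac=$2
    python3 - "$key" "$hmac" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
b64 = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{b64}:")
EOF
}
```

Run on the two keys from the log, this should reproduce the `NVMeTLSkey-1:01:MDAx…JEiQ:` and `NVMeTLSkey-1:01:ZmZl…Bm/Y:` strings echoed into the temp files above.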
00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.173 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.173 [2024-12-14 16:36:00.220325] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:30.173 [2024-12-14 16:36:00.220372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017222 ] 00:22:30.432 [2024-12-14 16:36:00.294246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.432 [2024-12-14 16:36:00.316310] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.432 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.432 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:30.432 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Eb0yXY8bMn 00:22:30.691 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:30.691 [2024-12-14 16:36:00.763484] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.949 TLSTESTn1 00:22:30.949 16:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:30.949 Running I/O for 10 seconds... 00:22:33.263 5376.00 IOPS, 21.00 MiB/s [2024-12-14T15:36:04.284Z] 5517.00 IOPS, 21.55 MiB/s [2024-12-14T15:36:05.220Z] 5355.33 IOPS, 20.92 MiB/s [2024-12-14T15:36:06.160Z] 5277.00 IOPS, 20.61 MiB/s [2024-12-14T15:36:07.096Z] 5201.00 IOPS, 20.32 MiB/s [2024-12-14T15:36:08.032Z] 5133.00 IOPS, 20.05 MiB/s [2024-12-14T15:36:08.967Z] 5104.86 IOPS, 19.94 MiB/s [2024-12-14T15:36:10.344Z] 5085.25 IOPS, 19.86 MiB/s [2024-12-14T15:36:11.281Z] 5092.78 IOPS, 19.89 MiB/s [2024-12-14T15:36:11.281Z] 5083.60 IOPS, 19.86 MiB/s 00:22:41.195 Latency(us) 00:22:41.195 [2024-12-14T15:36:11.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.195 Verification LBA range: start 0x0 length 0x2000 00:22:41.195 TLSTESTn1 : 10.02 5087.83 19.87 0.00 0.00 25122.00 6272.73 40694.74 00:22:41.195 [2024-12-14T15:36:11.281Z] =================================================================================================================== 00:22:41.195 [2024-12-14T15:36:11.281Z] Total : 5087.83 19.87 0.00 0.00 25122.00 6272.73 40694.74 00:22:41.195 { 00:22:41.195 "results": [ 00:22:41.195 { 00:22:41.195 "job": "TLSTESTn1", 00:22:41.195 "core_mask": "0x4", 00:22:41.195 "workload": "verify", 00:22:41.195 "status": "finished", 00:22:41.195 "verify_range": { 00:22:41.195 "start": 0, 00:22:41.195 "length": 8192 00:22:41.195 }, 00:22:41.195 "queue_depth": 128, 00:22:41.195 "io_size": 4096, 00:22:41.195 "runtime": 10.016839, 00:22:41.195 "iops": 
5087.8325986870705, 00:22:41.195 "mibps": 19.87434608862137, 00:22:41.195 "io_failed": 0, 00:22:41.195 "io_timeout": 0, 00:22:41.195 "avg_latency_us": 25121.997086271916, 00:22:41.195 "min_latency_us": 6272.731428571428, 00:22:41.195 "max_latency_us": 40694.735238095236 00:22:41.195 } 00:22:41.195 ], 00:22:41.195 "core_count": 1 00:22:41.195 } 00:22:41.195 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.195 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1017222 00:22:41.195 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1017222 ']' 00:22:41.195 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1017222 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1017222 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1017222' 00:22:41.196 killing process with pid 1017222 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1017222 00:22:41.196 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.196 00:22:41.196 Latency(us) 00:22:41.196 [2024-12-14T15:36:11.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.196 [2024-12-14T15:36:11.282Z] 
=================================================================================================================== 00:22:41.196 [2024-12-14T15:36:11.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1017222 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eHbvH78FW4 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eHbvH78FW4 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eHbvH78FW4 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eHbvH78FW4 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019371 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019371 /var/tmp/bdevperf.sock 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019371 ']' 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.196 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.196 [2024-12-14 16:36:11.261381] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:41.196 [2024-12-14 16:36:11.261428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019371 ] 00:22:41.455 [2024-12-14 16:36:11.337876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.455 [2024-12-14 16:36:11.360455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.455 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.455 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:41.455 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eHbvH78FW4 00:22:41.714 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.714 [2024-12-14 16:36:11.795234] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.973 [2024-12-14 16:36:11.800088] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:41.973 [2024-12-14 16:36:11.800707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d90c0 (107): Transport endpoint is not connected 00:22:41.973 [2024-12-14 16:36:11.801699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d90c0 (9): Bad file descriptor 00:22:41.973 
[2024-12-14 16:36:11.802700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:41.973 [2024-12-14 16:36:11.802714] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:41.973 [2024-12-14 16:36:11.802722] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:41.973 [2024-12-14 16:36:11.802730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:41.973 request: 00:22:41.973 { 00:22:41.973 "name": "TLSTEST", 00:22:41.973 "trtype": "tcp", 00:22:41.973 "traddr": "10.0.0.2", 00:22:41.973 "adrfam": "ipv4", 00:22:41.973 "trsvcid": "4420", 00:22:41.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.973 "prchk_reftag": false, 00:22:41.973 "prchk_guard": false, 00:22:41.973 "hdgst": false, 00:22:41.973 "ddgst": false, 00:22:41.973 "psk": "key0", 00:22:41.973 "allow_unrecognized_csi": false, 00:22:41.973 "method": "bdev_nvme_attach_controller", 00:22:41.973 "req_id": 1 00:22:41.973 } 00:22:41.973 Got JSON-RPC error response 00:22:41.973 response: 00:22:41.973 { 00:22:41.973 "code": -5, 00:22:41.973 "message": "Input/output error" 00:22:41.973 } 00:22:41.973 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1019371 00:22:41.973 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019371 ']' 00:22:41.973 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019371 00:22:41.973 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:41.973 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.973 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019371 00:22:41.973 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:41.974 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:41.974 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019371' 00:22:41.974 killing process with pid 1019371 00:22:41.974 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019371 00:22:41.974 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.974 00:22:41.974 Latency(us) 00:22:41.974 [2024-12-14T15:36:12.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.974 [2024-12-14T15:36:12.060Z] =================================================================================================================== 00:22:41.974 [2024-12-14T15:36:12.060Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:41.974 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019371 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Eb0yXY8bMn 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Eb0yXY8bMn 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Eb0yXY8bMn 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Eb0yXY8bMn 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019524 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019524 
/var/tmp/bdevperf.sock 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019524 ']' 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.974 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.233 [2024-12-14 16:36:12.080410] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:42.233 [2024-12-14 16:36:12.080456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019524 ] 00:22:42.233 [2024-12-14 16:36:12.155096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.233 [2024-12-14 16:36:12.174805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.233 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.233 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:42.233 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Eb0yXY8bMn 00:22:42.492 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:42.751 [2024-12-14 16:36:12.633754] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.751 [2024-12-14 16:36:12.639811] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:42.751 [2024-12-14 16:36:12.639833] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:42.751 [2024-12-14 16:36:12.639856] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:42.751 [2024-12-14 16:36:12.640002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169b0c0 (107): Transport endpoint is not connected 00:22:42.751 [2024-12-14 16:36:12.640995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169b0c0 (9): Bad file descriptor 00:22:42.751 [2024-12-14 16:36:12.641997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:42.751 [2024-12-14 16:36:12.642006] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:42.751 [2024-12-14 16:36:12.642013] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:42.751 [2024-12-14 16:36:12.642021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:42.751 request: 00:22:42.751 { 00:22:42.751 "name": "TLSTEST", 00:22:42.751 "trtype": "tcp", 00:22:42.751 "traddr": "10.0.0.2", 00:22:42.751 "adrfam": "ipv4", 00:22:42.751 "trsvcid": "4420", 00:22:42.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.751 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:42.751 "prchk_reftag": false, 00:22:42.751 "prchk_guard": false, 00:22:42.751 "hdgst": false, 00:22:42.751 "ddgst": false, 00:22:42.751 "psk": "key0", 00:22:42.751 "allow_unrecognized_csi": false, 00:22:42.751 "method": "bdev_nvme_attach_controller", 00:22:42.751 "req_id": 1 00:22:42.751 } 00:22:42.751 Got JSON-RPC error response 00:22:42.751 response: 00:22:42.751 { 00:22:42.751 "code": -5, 00:22:42.751 "message": "Input/output error" 00:22:42.751 } 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1019524 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019524 ']' 00:22:42.751 16:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019524 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019524 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019524' 00:22:42.751 killing process with pid 1019524 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019524 00:22:42.751 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.751 00:22:42.751 Latency(us) 00:22:42.751 [2024-12-14T15:36:12.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.751 [2024-12-14T15:36:12.837Z] =================================================================================================================== 00:22:42.751 [2024-12-14T15:36:12.837Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:42.751 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019524 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.011 16:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eb0yXY8bMn 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eb0yXY8bMn 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eb0yXY8bMn 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Eb0yXY8bMn 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019753 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019753 /var/tmp/bdevperf.sock 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019753 ']' 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.011 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.011 [2024-12-14 16:36:12.922462] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:43.011 [2024-12-14 16:36:12.922511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019753 ] 00:22:43.011 [2024-12-14 16:36:12.998498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.011 [2024-12-14 16:36:13.018337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.270 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.270 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:43.270 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Eb0yXY8bMn 00:22:43.270 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.529 [2024-12-14 16:36:13.460993] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.529 [2024-12-14 16:36:13.471915] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:43.529 [2024-12-14 16:36:13.471935] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:43.529 [2024-12-14 16:36:13.471957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:43.529 [2024-12-14 16:36:13.472404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f80c0 (107): Transport endpoint is not connected 00:22:43.529 [2024-12-14 16:36:13.473397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f80c0 (9): Bad file descriptor 00:22:43.529 [2024-12-14 16:36:13.474399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:43.529 [2024-12-14 16:36:13.474408] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:43.529 [2024-12-14 16:36:13.474415] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:43.529 [2024-12-14 16:36:13.474423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:22:43.529 request: 00:22:43.529 { 00:22:43.529 "name": "TLSTEST", 00:22:43.529 "trtype": "tcp", 00:22:43.529 "traddr": "10.0.0.2", 00:22:43.529 "adrfam": "ipv4", 00:22:43.529 "trsvcid": "4420", 00:22:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.529 "prchk_reftag": false, 00:22:43.529 "prchk_guard": false, 00:22:43.529 "hdgst": false, 00:22:43.529 "ddgst": false, 00:22:43.529 "psk": "key0", 00:22:43.529 "allow_unrecognized_csi": false, 00:22:43.529 "method": "bdev_nvme_attach_controller", 00:22:43.529 "req_id": 1 00:22:43.529 } 00:22:43.529 Got JSON-RPC error response 00:22:43.529 response: 00:22:43.529 { 00:22:43.529 "code": -5, 00:22:43.529 "message": "Input/output error" 00:22:43.529 } 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1019753 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019753 ']' 00:22:43.529 16:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019753 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019753 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019753' 00:22:43.529 killing process with pid 1019753 00:22:43.529 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019753 00:22:43.529 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.529 00:22:43.529 Latency(us) 00:22:43.529 [2024-12-14T15:36:13.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.529 [2024-12-14T15:36:13.616Z] =================================================================================================================== 00:22:43.530 [2024-12-14T15:36:13.616Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.530 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019753 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.789 16:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019768 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.789 16:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019768 /var/tmp/bdevperf.sock 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019768 ']' 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.789 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.789 [2024-12-14 16:36:13.749966] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:43.789 [2024-12-14 16:36:13.750012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019768 ] 00:22:43.789 [2024-12-14 16:36:13.816019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.789 [2024-12-14 16:36:13.835578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.048 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.048 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:44.048 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:44.048 [2024-12-14 16:36:14.110304] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:44.048 [2024-12-14 16:36:14.110333] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:44.048 request: 00:22:44.048 { 00:22:44.048 "name": "key0", 00:22:44.048 "path": "", 00:22:44.048 "method": "keyring_file_add_key", 00:22:44.048 "req_id": 1 00:22:44.048 } 00:22:44.048 Got JSON-RPC error response 00:22:44.049 response: 00:22:44.049 { 00:22:44.049 "code": -1, 00:22:44.049 "message": "Operation not permitted" 00:22:44.049 } 00:22:44.308 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.308 [2024-12-14 16:36:14.314926] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:44.308 [2024-12-14 16:36:14.314964] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:44.308 request: 00:22:44.308 { 00:22:44.308 "name": "TLSTEST", 00:22:44.308 "trtype": "tcp", 00:22:44.308 "traddr": "10.0.0.2", 00:22:44.308 "adrfam": "ipv4", 00:22:44.308 "trsvcid": "4420", 00:22:44.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.308 "prchk_reftag": false, 00:22:44.308 "prchk_guard": false, 00:22:44.308 "hdgst": false, 00:22:44.308 "ddgst": false, 00:22:44.308 "psk": "key0", 00:22:44.308 "allow_unrecognized_csi": false, 00:22:44.308 "method": "bdev_nvme_attach_controller", 00:22:44.308 "req_id": 1 00:22:44.308 } 00:22:44.308 Got JSON-RPC error response 00:22:44.308 response: 00:22:44.308 { 00:22:44.308 "code": -126, 00:22:44.308 "message": "Required key not available" 00:22:44.308 } 00:22:44.308 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1019768 00:22:44.308 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019768 ']' 00:22:44.308 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019768 00:22:44.308 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.308 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.308 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019768 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019768' 00:22:44.567 killing process with pid 1019768 
00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019768 00:22:44.567 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.567 00:22:44.567 Latency(us) 00:22:44.567 [2024-12-14T15:36:14.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.567 [2024-12-14T15:36:14.653Z] =================================================================================================================== 00:22:44.567 [2024-12-14T15:36:14.653Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019768 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1014783 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1014783 ']' 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1014783 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1014783 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1014783' 00:22:44.567 killing process with pid 1014783 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1014783 00:22:44.567 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1014783 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.faDpddw6hK 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:44.827 16:36:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.faDpddw6hK 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1020007 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1020007 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1020007 ']' 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.827 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.827 [2024-12-14 16:36:14.856694] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:44.827 [2024-12-14 16:36:14.856740] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.086 [2024-12-14 16:36:14.932527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.086 [2024-12-14 16:36:14.953040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.086 [2024-12-14 16:36:14.953078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.086 [2024-12-14 16:36:14.953085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.086 [2024-12-14 16:36:14.953091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.086 [2024-12-14 16:36:14.953096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.086 [2024-12-14 16:36:14.953553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.faDpddw6hK 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.faDpddw6hK 00:22:45.086 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.344 [2024-12-14 16:36:15.247353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.344 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:45.603 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:45.603 [2024-12-14 16:36:15.608271] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.603 [2024-12-14 16:36:15.608486] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:45.603 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:45.861 malloc0 00:22:45.862 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.121 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:22:46.121 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.faDpddw6hK 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.faDpddw6hK 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1020257 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.379 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:46.380 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1020257 /var/tmp/bdevperf.sock 00:22:46.380 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1020257 ']' 00:22:46.380 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.380 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.380 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.380 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.380 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.380 [2024-12-14 16:36:16.404697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:46.380 [2024-12-14 16:36:16.404745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020257 ] 00:22:46.638 [2024-12-14 16:36:16.480251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.638 [2024-12-14 16:36:16.502204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.638 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.638 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.638 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:22:46.897 16:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.897 [2024-12-14 16:36:16.953032] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.156 TLSTESTn1 00:22:47.156 16:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:47.156 Running I/O for 10 seconds... 
00:22:49.468 5363.00 IOPS, 20.95 MiB/s [2024-12-14T15:36:20.491Z] 5482.00 IOPS, 21.41 MiB/s [2024-12-14T15:36:21.427Z] 5515.33 IOPS, 21.54 MiB/s [2024-12-14T15:36:22.362Z] 5526.25 IOPS, 21.59 MiB/s [2024-12-14T15:36:23.298Z] 5541.60 IOPS, 21.65 MiB/s [2024-12-14T15:36:24.236Z] 5558.83 IOPS, 21.71 MiB/s [2024-12-14T15:36:25.176Z] 5519.71 IOPS, 21.56 MiB/s [2024-12-14T15:36:26.551Z] 5518.12 IOPS, 21.56 MiB/s [2024-12-14T15:36:27.489Z] 5531.11 IOPS, 21.61 MiB/s [2024-12-14T15:36:27.489Z] 5540.10 IOPS, 21.64 MiB/s 00:22:57.403 Latency(us) 00:22:57.403 [2024-12-14T15:36:27.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.403 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:57.403 Verification LBA range: start 0x0 length 0x2000 00:22:57.403 TLSTESTn1 : 10.02 5543.37 21.65 0.00 0.00 23055.15 6147.90 36200.84 00:22:57.403 [2024-12-14T15:36:27.489Z] =================================================================================================================== 00:22:57.403 [2024-12-14T15:36:27.489Z] Total : 5543.37 21.65 0.00 0.00 23055.15 6147.90 36200.84 00:22:57.403 { 00:22:57.403 "results": [ 00:22:57.403 { 00:22:57.403 "job": "TLSTESTn1", 00:22:57.403 "core_mask": "0x4", 00:22:57.403 "workload": "verify", 00:22:57.403 "status": "finished", 00:22:57.403 "verify_range": { 00:22:57.403 "start": 0, 00:22:57.403 "length": 8192 00:22:57.403 }, 00:22:57.403 "queue_depth": 128, 00:22:57.403 "io_size": 4096, 00:22:57.403 "runtime": 10.017185, 00:22:57.403 "iops": 5543.373712275455, 00:22:57.403 "mibps": 21.653803563575995, 00:22:57.403 "io_failed": 0, 00:22:57.403 "io_timeout": 0, 00:22:57.403 "avg_latency_us": 23055.146319254887, 00:22:57.403 "min_latency_us": 6147.900952380953, 00:22:57.403 "max_latency_us": 36200.8380952381 00:22:57.403 } 00:22:57.403 ], 00:22:57.403 "core_count": 1 00:22:57.403 } 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1020257 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1020257 ']' 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1020257 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020257 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1020257' 00:22:57.403 killing process with pid 1020257 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1020257 00:22:57.403 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.403 00:22:57.403 Latency(us) 00:22:57.403 [2024-12-14T15:36:27.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.403 [2024-12-14T15:36:27.489Z] =================================================================================================================== 00:22:57.403 [2024-12-14T15:36:27.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1020257 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.faDpddw6hK 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.faDpddw6hK 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.faDpddw6hK 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.faDpddw6hK 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.faDpddw6hK 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1022041 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.403 
16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1022041 /var/tmp/bdevperf.sock 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022041 ']' 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.403 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.403 [2024-12-14 16:36:27.444422] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:57.403 [2024-12-14 16:36:27.444470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022041 ] 00:22:57.662 [2024-12-14 16:36:27.511900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.662 [2024-12-14 16:36:27.533747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.662 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.662 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:57.662 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:22:57.921 [2024-12-14 16:36:27.787920] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.faDpddw6hK': 0100666 00:22:57.921 [2024-12-14 16:36:27.787946] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:57.921 request: 00:22:57.921 { 00:22:57.921 "name": "key0", 00:22:57.921 "path": "/tmp/tmp.faDpddw6hK", 00:22:57.921 "method": "keyring_file_add_key", 00:22:57.921 "req_id": 1 00:22:57.921 } 00:22:57.921 Got JSON-RPC error response 00:22:57.921 response: 00:22:57.921 { 00:22:57.921 "code": -1, 00:22:57.921 "message": "Operation not permitted" 00:22:57.921 } 00:22:57.921 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.921 [2024-12-14 16:36:27.988511] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.921 [2024-12-14 16:36:27.988545] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:57.921 request: 00:22:57.921 { 00:22:57.921 "name": "TLSTEST", 00:22:57.921 "trtype": "tcp", 00:22:57.921 "traddr": "10.0.0.2", 00:22:57.921 "adrfam": "ipv4", 00:22:57.921 "trsvcid": "4420", 00:22:57.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.921 "prchk_reftag": false, 00:22:57.921 "prchk_guard": false, 00:22:57.921 "hdgst": false, 00:22:57.921 "ddgst": false, 00:22:57.921 "psk": "key0", 00:22:57.921 "allow_unrecognized_csi": false, 00:22:57.921 "method": "bdev_nvme_attach_controller", 00:22:57.921 "req_id": 1 00:22:57.921 } 00:22:57.921 Got JSON-RPC error response 00:22:57.921 response: 00:22:57.921 { 00:22:57.921 "code": -126, 00:22:57.921 "message": "Required key not available" 00:22:57.921 } 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1022041 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1022041 ']' 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022041 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022041 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1022041' 00:22:58.179 killing process with pid 1022041 00:22:58.179 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022041 00:22:58.180 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.180 00:22:58.180 Latency(us) 00:22:58.180 [2024-12-14T15:36:28.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.180 [2024-12-14T15:36:28.266Z] =================================================================================================================== 00:22:58.180 [2024-12-14T15:36:28.266Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022041 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1020007 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1020007 ']' 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1020007 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.180 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020007 00:22:58.438 
16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1020007' 00:22:58.438 killing process with pid 1020007 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1020007 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1020007 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1022277 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1022277 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022277 ']' 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:58.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.438 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.438 [2024-12-14 16:36:28.484118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:58.438 [2024-12-14 16:36:28.484163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.697 [2024-12-14 16:36:28.559594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.697 [2024-12-14 16:36:28.580265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.698 [2024-12-14 16:36:28.580306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.698 [2024-12-14 16:36:28.580314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.698 [2024-12-14 16:36:28.580319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.698 [2024-12-14 16:36:28.580325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.698 [2024-12-14 16:36:28.580803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.faDpddw6hK 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.faDpddw6hK 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.faDpddw6hK 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.faDpddw6hK 00:22:58.698 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:58.957 [2024-12-14 16:36:28.879058] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.957 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:59.215 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:59.215 [2024-12-14 16:36:29.235969] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.215 [2024-12-14 16:36:29.236169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.215 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:59.474 malloc0 00:22:59.474 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:59.733 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:22:59.733 [2024-12-14 16:36:29.789369] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.faDpddw6hK': 0100666 00:22:59.733 [2024-12-14 16:36:29.789396] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:59.733 request: 00:22:59.733 { 00:22:59.733 "name": "key0", 00:22:59.733 "path": "/tmp/tmp.faDpddw6hK", 00:22:59.733 "method": "keyring_file_add_key", 00:22:59.733 "req_id": 1 
00:22:59.733 } 00:22:59.733 Got JSON-RPC error response 00:22:59.733 response: 00:22:59.733 { 00:22:59.733 "code": -1, 00:22:59.733 "message": "Operation not permitted" 00:22:59.733 } 00:22:59.733 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.993 [2024-12-14 16:36:29.977879] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:59.993 [2024-12-14 16:36:29.977914] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:59.993 request: 00:22:59.993 { 00:22:59.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.993 "host": "nqn.2016-06.io.spdk:host1", 00:22:59.993 "psk": "key0", 00:22:59.993 "method": "nvmf_subsystem_add_host", 00:22:59.993 "req_id": 1 00:22:59.993 } 00:22:59.993 Got JSON-RPC error response 00:22:59.993 response: 00:22:59.993 { 00:22:59.993 "code": -32603, 00:22:59.993 "message": "Internal error" 00:22:59.993 } 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1022277 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1022277 ']' 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022277 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:59.993 16:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.993 16:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022277 00:22:59.993 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:59.993 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:59.993 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022277' 00:22:59.993 killing process with pid 1022277 00:22:59.993 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022277 00:22:59.993 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022277 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.faDpddw6hK 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1022534 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1022534 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022534 ']' 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.252 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.252 [2024-12-14 16:36:30.266896] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:00.252 [2024-12-14 16:36:30.266947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.252 [2024-12-14 16:36:30.330039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.511 [2024-12-14 16:36:30.350486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.511 [2024-12-14 16:36:30.350522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.511 [2024-12-14 16:36:30.350529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.511 [2024-12-14 16:36:30.350535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.511 [2024-12-14 16:36:30.350540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:00.511 [2024-12-14 16:36:30.351039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.faDpddw6hK 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.faDpddw6hK 00:23:00.511 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:00.770 [2024-12-14 16:36:30.653424] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.770 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:01.028 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:01.028 [2024-12-14 16:36:31.074511] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.028 [2024-12-14 16:36:31.074710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:01.028 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:01.287 malloc0 00:23:01.287 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.546 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1022787 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1022787 /var/tmp/bdevperf.sock 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022787 ']' 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:01.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.805 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.064 [2024-12-14 16:36:31.912501] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:02.064 [2024-12-14 16:36:31.912547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022787 ] 00:23:02.064 [2024-12-14 16:36:31.982948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.064 [2024-12-14 16:36:32.004968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.064 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.064 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:02.064 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:23:02.323 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:02.581 [2024-12-14 16:36:32.492165] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.581 TLSTESTn1 00:23:02.581 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:02.841 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:02.841 "subsystems": [ 00:23:02.841 { 00:23:02.841 "subsystem": "keyring", 00:23:02.841 "config": [ 00:23:02.841 { 00:23:02.841 "method": "keyring_file_add_key", 00:23:02.841 "params": { 00:23:02.841 "name": "key0", 00:23:02.841 "path": "/tmp/tmp.faDpddw6hK" 00:23:02.841 } 00:23:02.841 } 00:23:02.841 ] 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "subsystem": "iobuf", 00:23:02.841 "config": [ 00:23:02.841 { 00:23:02.841 "method": "iobuf_set_options", 00:23:02.841 "params": { 00:23:02.841 "small_pool_count": 8192, 00:23:02.841 "large_pool_count": 1024, 00:23:02.841 "small_bufsize": 8192, 00:23:02.841 "large_bufsize": 135168, 00:23:02.841 "enable_numa": false 00:23:02.841 } 00:23:02.841 } 00:23:02.841 ] 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "subsystem": "sock", 00:23:02.841 "config": [ 00:23:02.841 { 00:23:02.841 "method": "sock_set_default_impl", 00:23:02.841 "params": { 00:23:02.841 "impl_name": "posix" 00:23:02.841 } 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "method": "sock_impl_set_options", 00:23:02.841 "params": { 00:23:02.841 "impl_name": "ssl", 00:23:02.841 "recv_buf_size": 4096, 00:23:02.841 "send_buf_size": 4096, 00:23:02.841 "enable_recv_pipe": true, 00:23:02.841 "enable_quickack": false, 00:23:02.841 "enable_placement_id": 0, 00:23:02.841 "enable_zerocopy_send_server": true, 00:23:02.841 "enable_zerocopy_send_client": false, 00:23:02.841 "zerocopy_threshold": 0, 00:23:02.841 "tls_version": 0, 00:23:02.841 "enable_ktls": false 00:23:02.841 } 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "method": "sock_impl_set_options", 00:23:02.841 "params": { 00:23:02.841 "impl_name": "posix", 00:23:02.841 "recv_buf_size": 2097152, 00:23:02.841 "send_buf_size": 2097152, 00:23:02.841 "enable_recv_pipe": true, 00:23:02.841 "enable_quickack": false, 00:23:02.841 "enable_placement_id": 0, 
00:23:02.841 "enable_zerocopy_send_server": true, 00:23:02.841 "enable_zerocopy_send_client": false, 00:23:02.841 "zerocopy_threshold": 0, 00:23:02.841 "tls_version": 0, 00:23:02.841 "enable_ktls": false 00:23:02.841 } 00:23:02.841 } 00:23:02.841 ] 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "subsystem": "vmd", 00:23:02.841 "config": [] 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "subsystem": "accel", 00:23:02.841 "config": [ 00:23:02.841 { 00:23:02.841 "method": "accel_set_options", 00:23:02.841 "params": { 00:23:02.841 "small_cache_size": 128, 00:23:02.841 "large_cache_size": 16, 00:23:02.841 "task_count": 2048, 00:23:02.841 "sequence_count": 2048, 00:23:02.841 "buf_count": 2048 00:23:02.841 } 00:23:02.841 } 00:23:02.841 ] 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "subsystem": "bdev", 00:23:02.841 "config": [ 00:23:02.841 { 00:23:02.841 "method": "bdev_set_options", 00:23:02.841 "params": { 00:23:02.841 "bdev_io_pool_size": 65535, 00:23:02.841 "bdev_io_cache_size": 256, 00:23:02.841 "bdev_auto_examine": true, 00:23:02.841 "iobuf_small_cache_size": 128, 00:23:02.841 "iobuf_large_cache_size": 16 00:23:02.841 } 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "method": "bdev_raid_set_options", 00:23:02.841 "params": { 00:23:02.841 "process_window_size_kb": 1024, 00:23:02.841 "process_max_bandwidth_mb_sec": 0 00:23:02.841 } 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "method": "bdev_iscsi_set_options", 00:23:02.841 "params": { 00:23:02.841 "timeout_sec": 30 00:23:02.841 } 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "method": "bdev_nvme_set_options", 00:23:02.841 "params": { 00:23:02.841 "action_on_timeout": "none", 00:23:02.841 "timeout_us": 0, 00:23:02.841 "timeout_admin_us": 0, 00:23:02.841 "keep_alive_timeout_ms": 10000, 00:23:02.841 "arbitration_burst": 0, 00:23:02.841 "low_priority_weight": 0, 00:23:02.841 "medium_priority_weight": 0, 00:23:02.841 "high_priority_weight": 0, 00:23:02.841 "nvme_adminq_poll_period_us": 10000, 00:23:02.841 "nvme_ioq_poll_period_us": 0, 
00:23:02.841 "io_queue_requests": 0, 00:23:02.841 "delay_cmd_submit": true, 00:23:02.841 "transport_retry_count": 4, 00:23:02.841 "bdev_retry_count": 3, 00:23:02.841 "transport_ack_timeout": 0, 00:23:02.841 "ctrlr_loss_timeout_sec": 0, 00:23:02.841 "reconnect_delay_sec": 0, 00:23:02.841 "fast_io_fail_timeout_sec": 0, 00:23:02.841 "disable_auto_failback": false, 00:23:02.841 "generate_uuids": false, 00:23:02.841 "transport_tos": 0, 00:23:02.841 "nvme_error_stat": false, 00:23:02.841 "rdma_srq_size": 0, 00:23:02.841 "io_path_stat": false, 00:23:02.841 "allow_accel_sequence": false, 00:23:02.841 "rdma_max_cq_size": 0, 00:23:02.841 "rdma_cm_event_timeout_ms": 0, 00:23:02.841 "dhchap_digests": [ 00:23:02.841 "sha256", 00:23:02.841 "sha384", 00:23:02.841 "sha512" 00:23:02.841 ], 00:23:02.841 "dhchap_dhgroups": [ 00:23:02.841 "null", 00:23:02.841 "ffdhe2048", 00:23:02.841 "ffdhe3072", 00:23:02.841 "ffdhe4096", 00:23:02.841 "ffdhe6144", 00:23:02.841 "ffdhe8192" 00:23:02.841 ], 00:23:02.841 "rdma_umr_per_io": false 00:23:02.841 } 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "method": "bdev_nvme_set_hotplug", 00:23:02.841 "params": { 00:23:02.841 "period_us": 100000, 00:23:02.841 "enable": false 00:23:02.841 } 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "method": "bdev_malloc_create", 00:23:02.841 "params": { 00:23:02.841 "name": "malloc0", 00:23:02.841 "num_blocks": 8192, 00:23:02.841 "block_size": 4096, 00:23:02.841 "physical_block_size": 4096, 00:23:02.841 "uuid": "11f84c52-a30c-4f7e-8b1a-760d48fb5f55", 00:23:02.841 "optimal_io_boundary": 0, 00:23:02.841 "md_size": 0, 00:23:02.841 "dif_type": 0, 00:23:02.841 "dif_is_head_of_md": false, 00:23:02.841 "dif_pi_format": 0 00:23:02.841 } 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "method": "bdev_wait_for_examine" 00:23:02.841 } 00:23:02.841 ] 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "subsystem": "nbd", 00:23:02.841 "config": [] 00:23:02.841 }, 00:23:02.841 { 00:23:02.841 "subsystem": "scheduler", 00:23:02.841 "config": [ 
00:23:02.841 { 00:23:02.842 "method": "framework_set_scheduler", 00:23:02.842 "params": { 00:23:02.842 "name": "static" 00:23:02.842 } 00:23:02.842 } 00:23:02.842 ] 00:23:02.842 }, 00:23:02.842 { 00:23:02.842 "subsystem": "nvmf", 00:23:02.842 "config": [ 00:23:02.842 { 00:23:02.842 "method": "nvmf_set_config", 00:23:02.842 "params": { 00:23:02.842 "discovery_filter": "match_any", 00:23:02.842 "admin_cmd_passthru": { 00:23:02.842 "identify_ctrlr": false 00:23:02.842 }, 00:23:02.842 "dhchap_digests": [ 00:23:02.842 "sha256", 00:23:02.842 "sha384", 00:23:02.842 "sha512" 00:23:02.842 ], 00:23:02.842 "dhchap_dhgroups": [ 00:23:02.842 "null", 00:23:02.842 "ffdhe2048", 00:23:02.842 "ffdhe3072", 00:23:02.842 "ffdhe4096", 00:23:02.842 "ffdhe6144", 00:23:02.842 "ffdhe8192" 00:23:02.842 ] 00:23:02.842 } 00:23:02.842 }, 00:23:02.842 { 00:23:02.842 "method": "nvmf_set_max_subsystems", 00:23:02.842 "params": { 00:23:02.842 "max_subsystems": 1024 00:23:02.842 } 00:23:02.842 }, 00:23:02.842 { 00:23:02.842 "method": "nvmf_set_crdt", 00:23:02.842 "params": { 00:23:02.842 "crdt1": 0, 00:23:02.842 "crdt2": 0, 00:23:02.842 "crdt3": 0 00:23:02.842 } 00:23:02.842 }, 00:23:02.842 { 00:23:02.842 "method": "nvmf_create_transport", 00:23:02.842 "params": { 00:23:02.842 "trtype": "TCP", 00:23:02.842 "max_queue_depth": 128, 00:23:02.842 "max_io_qpairs_per_ctrlr": 127, 00:23:02.842 "in_capsule_data_size": 4096, 00:23:02.842 "max_io_size": 131072, 00:23:02.842 "io_unit_size": 131072, 00:23:02.842 "max_aq_depth": 128, 00:23:02.842 "num_shared_buffers": 511, 00:23:02.842 "buf_cache_size": 4294967295, 00:23:02.842 "dif_insert_or_strip": false, 00:23:02.842 "zcopy": false, 00:23:02.842 "c2h_success": false, 00:23:02.842 "sock_priority": 0, 00:23:02.842 "abort_timeout_sec": 1, 00:23:02.842 "ack_timeout": 0, 00:23:02.842 "data_wr_pool_size": 0 00:23:02.842 } 00:23:02.842 }, 00:23:02.842 { 00:23:02.842 "method": "nvmf_create_subsystem", 00:23:02.842 "params": { 00:23:02.842 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:02.842 "allow_any_host": false, 00:23:02.842 "serial_number": "SPDK00000000000001", 00:23:02.842 "model_number": "SPDK bdev Controller", 00:23:02.842 "max_namespaces": 10, 00:23:02.842 "min_cntlid": 1, 00:23:02.842 "max_cntlid": 65519, 00:23:02.842 "ana_reporting": false 00:23:02.842 } 00:23:02.842 }, 00:23:02.842 { 00:23:02.842 "method": "nvmf_subsystem_add_host", 00:23:02.842 "params": { 00:23:02.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.842 "host": "nqn.2016-06.io.spdk:host1", 00:23:02.842 "psk": "key0" 00:23:02.842 } 00:23:02.842 }, 00:23:02.842 { 00:23:02.842 "method": "nvmf_subsystem_add_ns", 00:23:02.842 "params": { 00:23:02.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.842 "namespace": { 00:23:02.842 "nsid": 1, 00:23:02.842 "bdev_name": "malloc0", 00:23:02.842 "nguid": "11F84C52A30C4F7E8B1A760D48FB5F55", 00:23:02.842 "uuid": "11f84c52-a30c-4f7e-8b1a-760d48fb5f55", 00:23:02.842 "no_auto_visible": false 00:23:02.842 } 00:23:02.842 } 00:23:02.842 }, 00:23:02.842 { 00:23:02.842 "method": "nvmf_subsystem_add_listener", 00:23:02.842 "params": { 00:23:02.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.842 "listen_address": { 00:23:02.842 "trtype": "TCP", 00:23:02.842 "adrfam": "IPv4", 00:23:02.842 "traddr": "10.0.0.2", 00:23:02.842 "trsvcid": "4420" 00:23:02.842 }, 00:23:02.842 "secure_channel": true 00:23:02.842 } 00:23:02.842 } 00:23:02.842 ] 00:23:02.842 } 00:23:02.842 ] 00:23:02.842 }' 00:23:02.842 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:03.101 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:03.101 "subsystems": [ 00:23:03.101 { 00:23:03.101 "subsystem": "keyring", 00:23:03.101 "config": [ 00:23:03.101 { 00:23:03.101 "method": "keyring_file_add_key", 00:23:03.101 "params": { 00:23:03.101 "name": "key0", 00:23:03.101 "path": 
"/tmp/tmp.faDpddw6hK" 00:23:03.101 } 00:23:03.101 } 00:23:03.101 ] 00:23:03.101 }, 00:23:03.101 { 00:23:03.101 "subsystem": "iobuf", 00:23:03.101 "config": [ 00:23:03.101 { 00:23:03.101 "method": "iobuf_set_options", 00:23:03.101 "params": { 00:23:03.101 "small_pool_count": 8192, 00:23:03.101 "large_pool_count": 1024, 00:23:03.101 "small_bufsize": 8192, 00:23:03.101 "large_bufsize": 135168, 00:23:03.101 "enable_numa": false 00:23:03.101 } 00:23:03.101 } 00:23:03.101 ] 00:23:03.101 }, 00:23:03.101 { 00:23:03.101 "subsystem": "sock", 00:23:03.101 "config": [ 00:23:03.101 { 00:23:03.101 "method": "sock_set_default_impl", 00:23:03.101 "params": { 00:23:03.101 "impl_name": "posix" 00:23:03.101 } 00:23:03.101 }, 00:23:03.101 { 00:23:03.101 "method": "sock_impl_set_options", 00:23:03.101 "params": { 00:23:03.101 "impl_name": "ssl", 00:23:03.102 "recv_buf_size": 4096, 00:23:03.102 "send_buf_size": 4096, 00:23:03.102 "enable_recv_pipe": true, 00:23:03.102 "enable_quickack": false, 00:23:03.102 "enable_placement_id": 0, 00:23:03.102 "enable_zerocopy_send_server": true, 00:23:03.102 "enable_zerocopy_send_client": false, 00:23:03.102 "zerocopy_threshold": 0, 00:23:03.102 "tls_version": 0, 00:23:03.102 "enable_ktls": false 00:23:03.102 } 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "method": "sock_impl_set_options", 00:23:03.102 "params": { 00:23:03.102 "impl_name": "posix", 00:23:03.102 "recv_buf_size": 2097152, 00:23:03.102 "send_buf_size": 2097152, 00:23:03.102 "enable_recv_pipe": true, 00:23:03.102 "enable_quickack": false, 00:23:03.102 "enable_placement_id": 0, 00:23:03.102 "enable_zerocopy_send_server": true, 00:23:03.102 "enable_zerocopy_send_client": false, 00:23:03.102 "zerocopy_threshold": 0, 00:23:03.102 "tls_version": 0, 00:23:03.102 "enable_ktls": false 00:23:03.102 } 00:23:03.102 } 00:23:03.102 ] 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "subsystem": "vmd", 00:23:03.102 "config": [] 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "subsystem": "accel", 00:23:03.102 
"config": [ 00:23:03.102 { 00:23:03.102 "method": "accel_set_options", 00:23:03.102 "params": { 00:23:03.102 "small_cache_size": 128, 00:23:03.102 "large_cache_size": 16, 00:23:03.102 "task_count": 2048, 00:23:03.102 "sequence_count": 2048, 00:23:03.102 "buf_count": 2048 00:23:03.102 } 00:23:03.102 } 00:23:03.102 ] 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "subsystem": "bdev", 00:23:03.102 "config": [ 00:23:03.102 { 00:23:03.102 "method": "bdev_set_options", 00:23:03.102 "params": { 00:23:03.102 "bdev_io_pool_size": 65535, 00:23:03.102 "bdev_io_cache_size": 256, 00:23:03.102 "bdev_auto_examine": true, 00:23:03.102 "iobuf_small_cache_size": 128, 00:23:03.102 "iobuf_large_cache_size": 16 00:23:03.102 } 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "method": "bdev_raid_set_options", 00:23:03.102 "params": { 00:23:03.102 "process_window_size_kb": 1024, 00:23:03.102 "process_max_bandwidth_mb_sec": 0 00:23:03.102 } 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "method": "bdev_iscsi_set_options", 00:23:03.102 "params": { 00:23:03.102 "timeout_sec": 30 00:23:03.102 } 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "method": "bdev_nvme_set_options", 00:23:03.102 "params": { 00:23:03.102 "action_on_timeout": "none", 00:23:03.102 "timeout_us": 0, 00:23:03.102 "timeout_admin_us": 0, 00:23:03.102 "keep_alive_timeout_ms": 10000, 00:23:03.102 "arbitration_burst": 0, 00:23:03.102 "low_priority_weight": 0, 00:23:03.102 "medium_priority_weight": 0, 00:23:03.102 "high_priority_weight": 0, 00:23:03.102 "nvme_adminq_poll_period_us": 10000, 00:23:03.102 "nvme_ioq_poll_period_us": 0, 00:23:03.102 "io_queue_requests": 512, 00:23:03.102 "delay_cmd_submit": true, 00:23:03.102 "transport_retry_count": 4, 00:23:03.102 "bdev_retry_count": 3, 00:23:03.102 "transport_ack_timeout": 0, 00:23:03.102 "ctrlr_loss_timeout_sec": 0, 00:23:03.102 "reconnect_delay_sec": 0, 00:23:03.102 "fast_io_fail_timeout_sec": 0, 00:23:03.102 "disable_auto_failback": false, 00:23:03.102 "generate_uuids": false, 00:23:03.102 
"transport_tos": 0, 00:23:03.102 "nvme_error_stat": false, 00:23:03.102 "rdma_srq_size": 0, 00:23:03.102 "io_path_stat": false, 00:23:03.102 "allow_accel_sequence": false, 00:23:03.102 "rdma_max_cq_size": 0, 00:23:03.102 "rdma_cm_event_timeout_ms": 0, 00:23:03.102 "dhchap_digests": [ 00:23:03.102 "sha256", 00:23:03.102 "sha384", 00:23:03.102 "sha512" 00:23:03.102 ], 00:23:03.102 "dhchap_dhgroups": [ 00:23:03.102 "null", 00:23:03.102 "ffdhe2048", 00:23:03.102 "ffdhe3072", 00:23:03.102 "ffdhe4096", 00:23:03.102 "ffdhe6144", 00:23:03.102 "ffdhe8192" 00:23:03.102 ], 00:23:03.102 "rdma_umr_per_io": false 00:23:03.102 } 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "method": "bdev_nvme_attach_controller", 00:23:03.102 "params": { 00:23:03.102 "name": "TLSTEST", 00:23:03.102 "trtype": "TCP", 00:23:03.102 "adrfam": "IPv4", 00:23:03.102 "traddr": "10.0.0.2", 00:23:03.102 "trsvcid": "4420", 00:23:03.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.102 "prchk_reftag": false, 00:23:03.102 "prchk_guard": false, 00:23:03.102 "ctrlr_loss_timeout_sec": 0, 00:23:03.102 "reconnect_delay_sec": 0, 00:23:03.102 "fast_io_fail_timeout_sec": 0, 00:23:03.102 "psk": "key0", 00:23:03.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.102 "hdgst": false, 00:23:03.102 "ddgst": false, 00:23:03.102 "multipath": "multipath" 00:23:03.102 } 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "method": "bdev_nvme_set_hotplug", 00:23:03.102 "params": { 00:23:03.102 "period_us": 100000, 00:23:03.102 "enable": false 00:23:03.102 } 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "method": "bdev_wait_for_examine" 00:23:03.102 } 00:23:03.102 ] 00:23:03.102 }, 00:23:03.102 { 00:23:03.102 "subsystem": "nbd", 00:23:03.102 "config": [] 00:23:03.102 } 00:23:03.102 ] 00:23:03.102 }' 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1022787 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1022787 ']' 00:23:03.102 16:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022787 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022787 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022787' 00:23:03.102 killing process with pid 1022787 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022787 00:23:03.102 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.102 00:23:03.102 Latency(us) 00:23:03.102 [2024-12-14T15:36:33.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.102 [2024-12-14T15:36:33.188Z] =================================================================================================================== 00:23:03.102 [2024-12-14T15:36:33.188Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:03.102 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022787 00:23:03.361 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1022534 00:23:03.361 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1022534 ']' 00:23:03.361 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022534 00:23:03.362 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:23:03.362 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.362 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022534 00:23:03.362 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:03.362 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:03.362 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022534' 00:23:03.362 killing process with pid 1022534 00:23:03.362 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022534 00:23:03.362 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022534 00:23:03.621 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:03.621 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.621 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.621 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:03.621 "subsystems": [ 00:23:03.621 { 00:23:03.621 "subsystem": "keyring", 00:23:03.621 "config": [ 00:23:03.621 { 00:23:03.621 "method": "keyring_file_add_key", 00:23:03.621 "params": { 00:23:03.621 "name": "key0", 00:23:03.621 "path": "/tmp/tmp.faDpddw6hK" 00:23:03.621 } 00:23:03.621 } 00:23:03.621 ] 00:23:03.621 }, 00:23:03.621 { 00:23:03.621 "subsystem": "iobuf", 00:23:03.621 "config": [ 00:23:03.621 { 00:23:03.621 "method": "iobuf_set_options", 00:23:03.621 "params": { 00:23:03.621 "small_pool_count": 8192, 00:23:03.621 "large_pool_count": 1024, 00:23:03.621 "small_bufsize": 8192, 00:23:03.621 "large_bufsize": 135168, 00:23:03.621 "enable_numa": false 
00:23:03.621 } 00:23:03.621 } 00:23:03.621 ] 00:23:03.621 }, 00:23:03.621 { 00:23:03.621 "subsystem": "sock", 00:23:03.621 "config": [ 00:23:03.621 { 00:23:03.621 "method": "sock_set_default_impl", 00:23:03.621 "params": { 00:23:03.621 "impl_name": "posix" 00:23:03.621 } 00:23:03.621 }, 00:23:03.621 { 00:23:03.621 "method": "sock_impl_set_options", 00:23:03.621 "params": { 00:23:03.621 "impl_name": "ssl", 00:23:03.621 "recv_buf_size": 4096, 00:23:03.621 "send_buf_size": 4096, 00:23:03.621 "enable_recv_pipe": true, 00:23:03.621 "enable_quickack": false, 00:23:03.621 "enable_placement_id": 0, 00:23:03.621 "enable_zerocopy_send_server": true, 00:23:03.621 "enable_zerocopy_send_client": false, 00:23:03.622 "zerocopy_threshold": 0, 00:23:03.622 "tls_version": 0, 00:23:03.622 "enable_ktls": false 00:23:03.622 } 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "method": "sock_impl_set_options", 00:23:03.622 "params": { 00:23:03.622 "impl_name": "posix", 00:23:03.622 "recv_buf_size": 2097152, 00:23:03.622 "send_buf_size": 2097152, 00:23:03.622 "enable_recv_pipe": true, 00:23:03.622 "enable_quickack": false, 00:23:03.622 "enable_placement_id": 0, 00:23:03.622 "enable_zerocopy_send_server": true, 00:23:03.622 "enable_zerocopy_send_client": false, 00:23:03.622 "zerocopy_threshold": 0, 00:23:03.622 "tls_version": 0, 00:23:03.622 "enable_ktls": false 00:23:03.622 } 00:23:03.622 } 00:23:03.622 ] 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "subsystem": "vmd", 00:23:03.622 "config": [] 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "subsystem": "accel", 00:23:03.622 "config": [ 00:23:03.622 { 00:23:03.622 "method": "accel_set_options", 00:23:03.622 "params": { 00:23:03.622 "small_cache_size": 128, 00:23:03.622 "large_cache_size": 16, 00:23:03.622 "task_count": 2048, 00:23:03.622 "sequence_count": 2048, 00:23:03.622 "buf_count": 2048 00:23:03.622 } 00:23:03.622 } 00:23:03.622 ] 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "subsystem": "bdev", 00:23:03.622 "config": [ 00:23:03.622 { 
00:23:03.622 "method": "bdev_set_options", 00:23:03.622 "params": { 00:23:03.622 "bdev_io_pool_size": 65535, 00:23:03.622 "bdev_io_cache_size": 256, 00:23:03.622 "bdev_auto_examine": true, 00:23:03.622 "iobuf_small_cache_size": 128, 00:23:03.622 "iobuf_large_cache_size": 16 00:23:03.622 } 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "method": "bdev_raid_set_options", 00:23:03.622 "params": { 00:23:03.622 "process_window_size_kb": 1024, 00:23:03.622 "process_max_bandwidth_mb_sec": 0 00:23:03.622 } 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "method": "bdev_iscsi_set_options", 00:23:03.622 "params": { 00:23:03.622 "timeout_sec": 30 00:23:03.622 } 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "method": "bdev_nvme_set_options", 00:23:03.622 "params": { 00:23:03.622 "action_on_timeout": "none", 00:23:03.622 "timeout_us": 0, 00:23:03.622 "timeout_admin_us": 0, 00:23:03.622 "keep_alive_timeout_ms": 10000, 00:23:03.622 "arbitration_burst": 0, 00:23:03.622 "low_priority_weight": 0, 00:23:03.622 "medium_priority_weight": 0, 00:23:03.622 "high_priority_weight": 0, 00:23:03.622 "nvme_adminq_poll_period_us": 10000, 00:23:03.622 "nvme_ioq_poll_period_us": 0, 00:23:03.622 "io_queue_requests": 0, 00:23:03.622 "delay_cmd_submit": true, 00:23:03.622 "transport_retry_count": 4, 00:23:03.622 "bdev_retry_count": 3, 00:23:03.622 "transport_ack_timeout": 0, 00:23:03.622 "ctrlr_loss_timeout_sec": 0, 00:23:03.622 "reconnect_delay_sec": 0, 00:23:03.622 "fast_io_fail_timeout_sec": 0, 00:23:03.622 "disable_auto_failback": false, 00:23:03.622 "generate_uuids": false, 00:23:03.622 "transport_tos": 0, 00:23:03.622 "nvme_error_stat": false, 00:23:03.622 "rdma_srq_size": 0, 00:23:03.622 "io_path_stat": false, 00:23:03.622 "allow_accel_sequence": false, 00:23:03.622 "rdma_max_cq_size": 0, 00:23:03.622 "rdma_cm_event_timeout_ms": 0, 00:23:03.622 "dhchap_digests": [ 00:23:03.622 "sha256", 00:23:03.622 "sha384", 00:23:03.622 "sha512" 00:23:03.622 ], 00:23:03.622 "dhchap_dhgroups": [ 00:23:03.622 "null", 
00:23:03.622 "ffdhe2048", 00:23:03.622 "ffdhe3072", 00:23:03.622 "ffdhe4096", 00:23:03.622 "ffdhe6144", 00:23:03.622 "ffdhe8192" 00:23:03.622 ], 00:23:03.622 "rdma_umr_per_io": false 00:23:03.622 } 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "method": "bdev_nvme_set_hotplug", 00:23:03.622 "params": { 00:23:03.622 "period_us": 100000, 00:23:03.622 "enable": false 00:23:03.622 } 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "method": "bdev_malloc_create", 00:23:03.622 "params": { 00:23:03.622 "name": "malloc0", 00:23:03.622 "num_blocks": 8192, 00:23:03.622 "block_size": 4096, 00:23:03.622 "physical_block_size": 4096, 00:23:03.622 "uuid": "11f84c52-a30c-4f7e-8b1a-760d48fb5f55", 00:23:03.622 "optimal_io_boundary": 0, 00:23:03.622 "md_size": 0, 00:23:03.622 "dif_type": 0, 00:23:03.622 "dif_is_head_of_md": false, 00:23:03.622 "dif_pi_format": 0 00:23:03.622 } 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "method": "bdev_wait_for_examine" 00:23:03.622 } 00:23:03.622 ] 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "subsystem": "nbd", 00:23:03.622 "config": [] 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "subsystem": "scheduler", 00:23:03.622 "config": [ 00:23:03.622 { 00:23:03.622 "method": "framework_set_scheduler", 00:23:03.622 "params": { 00:23:03.622 "name": "static" 00:23:03.622 } 00:23:03.622 } 00:23:03.622 ] 00:23:03.622 }, 00:23:03.622 { 00:23:03.622 "subsystem": "nvmf", 00:23:03.622 "config": [ 00:23:03.622 { 00:23:03.622 "method": "nvmf_set_config", 00:23:03.622 "params": { 00:23:03.623 "discovery_filter": "match_any", 00:23:03.623 "admin_cmd_passthru": { 00:23:03.623 "identify_ctrlr": false 00:23:03.623 }, 00:23:03.623 "dhchap_digests": [ 00:23:03.623 "sha256", 00:23:03.623 "sha384", 00:23:03.623 "sha512" 00:23:03.623 ], 00:23:03.623 "dhchap_dhgroups": [ 00:23:03.623 "null", 00:23:03.623 "ffdhe2048", 00:23:03.623 "ffdhe3072", 00:23:03.623 "ffdhe4096", 00:23:03.623 "ffdhe6144", 00:23:03.623 "ffdhe8192" 00:23:03.623 ] 00:23:03.623 } 00:23:03.623 }, 00:23:03.623 { 
00:23:03.623 "method": "nvmf_set_max_subsystems", 00:23:03.623 "params": { 00:23:03.623 "max_subsystems": 1024 00:23:03.623 } 00:23:03.623 }, 00:23:03.623 { 00:23:03.623 "method": "nvmf_set_crdt", 00:23:03.623 "params": { 00:23:03.623 "crdt1": 0, 00:23:03.623 "crdt2": 0, 00:23:03.623 "crdt3": 0 00:23:03.623 } 00:23:03.623 }, 00:23:03.623 { 00:23:03.623 "method": "nvmf_create_transport", 00:23:03.623 "params": { 00:23:03.623 "trtype": "TCP", 00:23:03.623 "max_queue_depth": 128, 00:23:03.623 "max_io_qpairs_per_ctrlr": 127, 00:23:03.623 "in_capsule_data_size": 4096, 00:23:03.623 "max_io_size": 131072, 00:23:03.623 "io_unit_size": 131072, 00:23:03.623 "max_aq_depth": 128, 00:23:03.623 "num_shared_buffers": 511, 00:23:03.623 "buf_cache_size": 4294967295, 00:23:03.623 "dif_insert_or_strip": false, 00:23:03.623 "zcopy": false, 00:23:03.623 "c2h_success": false, 00:23:03.623 "sock_priority": 0, 00:23:03.623 "abort_timeout_sec": 1, 00:23:03.623 "ack_timeout": 0, 00:23:03.623 "data_wr_pool_size": 0 00:23:03.623 } 00:23:03.623 }, 00:23:03.623 { 00:23:03.623 "method": "nvmf_create_subsystem", 00:23:03.623 "params": { 00:23:03.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.623 "allow_any_host": false, 00:23:03.623 "serial_number": "SPDK00000000000001", 00:23:03.623 "model_number": "SPDK bdev Controller", 00:23:03.623 "max_namespaces": 10, 00:23:03.623 "min_cntlid": 1, 00:23:03.623 "max_cntlid": 65519, 00:23:03.623 "ana_reporting": false 00:23:03.623 } 00:23:03.623 }, 00:23:03.623 { 00:23:03.623 "method": "nvmf_subsystem_add_host", 00:23:03.623 "params": { 00:23:03.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.623 "host": "nqn.2016-06.io.spdk:host1", 00:23:03.623 "psk": "key0" 00:23:03.623 } 00:23:03.623 }, 00:23:03.623 { 00:23:03.623 "method": "nvmf_subsystem_add_ns", 00:23:03.623 "params": { 00:23:03.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.623 "namespace": { 00:23:03.623 "nsid": 1, 00:23:03.623 "bdev_name": "malloc0", 00:23:03.623 "nguid": 
"11F84C52A30C4F7E8B1A760D48FB5F55", 00:23:03.623 "uuid": "11f84c52-a30c-4f7e-8b1a-760d48fb5f55", 00:23:03.623 "no_auto_visible": false 00:23:03.623 } 00:23:03.623 } 00:23:03.623 }, 00:23:03.623 { 00:23:03.623 "method": "nvmf_subsystem_add_listener", 00:23:03.623 "params": { 00:23:03.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.623 "listen_address": { 00:23:03.623 "trtype": "TCP", 00:23:03.623 "adrfam": "IPv4", 00:23:03.623 "traddr": "10.0.0.2", 00:23:03.623 "trsvcid": "4420" 00:23:03.623 }, 00:23:03.623 "secure_channel": true 00:23:03.623 } 00:23:03.623 } 00:23:03.623 ] 00:23:03.623 } 00:23:03.623 ] 00:23:03.623 }' 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1023063 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1023063 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1023063 ']' 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
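The nvmf_tgt invocation above receives its entire subsystems configuration through `-c /dev/fd/62`: the harness emits the JSON via process substitution, so the target reads its config straight from a file descriptor and no file touches disk. A minimal sketch of that plumbing, with `python3 -m json.tool` standing in for nvmf_tgt purely to show the mechanism (the real command in this log is `.../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62`):

```shell
# Sketch of the "-c /dev/fd/62" pattern: build the subsystems JSON in the
# shell and hand it to the consumer on stdin / a pipe fd. In the harness the
# consumer is nvmf_tgt and the fd comes from bash process substitution <(...),
# which is what shows up as /dev/fd/62 in the log.
config='{ "subsystems": [ { "subsystem": "nbd", "config": [] } ] }'
# json.tool is only a stand-in that validates the JSON the way nvmf_tgt would
# parse it at startup:
printf '%s' "$config" | python3 -m json.tool > /dev/null && echo "config ok"
```

The same technique is used twice more below for bdevperf, whose `-c /dev/fd/63` config carries the client-side keyring and TLS options.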
00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.623 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.623 [2024-12-14 16:36:33.592824] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:03.623 [2024-12-14 16:36:33.592871] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.623 [2024-12-14 16:36:33.671128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.623 [2024-12-14 16:36:33.690307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.623 [2024-12-14 16:36:33.690345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.623 [2024-12-14 16:36:33.690352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.623 [2024-12-14 16:36:33.690357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.623 [2024-12-14 16:36:33.690362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:03.623 [2024-12-14 16:36:33.690884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.937 [2024-12-14 16:36:33.899732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.937 [2024-12-14 16:36:33.931746] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:03.937 [2024-12-14 16:36:33.931930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1023280 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1023280 /var/tmp/bdevperf.sock 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1023280 ']' 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.556 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:04.556 "subsystems": [ 00:23:04.556 { 00:23:04.556 "subsystem": "keyring", 00:23:04.556 "config": [ 00:23:04.556 { 00:23:04.556 "method": "keyring_file_add_key", 00:23:04.556 "params": { 00:23:04.556 "name": "key0", 00:23:04.556 "path": "/tmp/tmp.faDpddw6hK" 00:23:04.556 } 00:23:04.556 } 00:23:04.556 ] 00:23:04.556 }, 00:23:04.556 { 00:23:04.556 "subsystem": "iobuf", 00:23:04.556 "config": [ 00:23:04.556 { 00:23:04.556 "method": "iobuf_set_options", 00:23:04.556 "params": { 00:23:04.556 "small_pool_count": 8192, 00:23:04.556 "large_pool_count": 1024, 00:23:04.556 "small_bufsize": 8192, 00:23:04.556 "large_bufsize": 135168, 00:23:04.556 "enable_numa": false 00:23:04.556 } 00:23:04.556 } 00:23:04.557 ] 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "subsystem": "sock", 00:23:04.557 "config": [ 00:23:04.557 { 00:23:04.557 "method": "sock_set_default_impl", 00:23:04.557 "params": { 00:23:04.557 "impl_name": "posix" 00:23:04.557 } 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "method": "sock_impl_set_options", 00:23:04.557 "params": { 00:23:04.557 "impl_name": "ssl", 00:23:04.557 "recv_buf_size": 4096, 00:23:04.557 "send_buf_size": 4096, 00:23:04.557 "enable_recv_pipe": true, 00:23:04.557 "enable_quickack": false, 00:23:04.557 "enable_placement_id": 0, 00:23:04.557 "enable_zerocopy_send_server": true, 00:23:04.557 "enable_zerocopy_send_client": false, 00:23:04.557 "zerocopy_threshold": 0, 00:23:04.557 "tls_version": 0, 00:23:04.557 "enable_ktls": false 00:23:04.557 } 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "method": "sock_impl_set_options", 00:23:04.557 "params": { 
00:23:04.557 "impl_name": "posix", 00:23:04.557 "recv_buf_size": 2097152, 00:23:04.557 "send_buf_size": 2097152, 00:23:04.557 "enable_recv_pipe": true, 00:23:04.557 "enable_quickack": false, 00:23:04.557 "enable_placement_id": 0, 00:23:04.557 "enable_zerocopy_send_server": true, 00:23:04.557 "enable_zerocopy_send_client": false, 00:23:04.557 "zerocopy_threshold": 0, 00:23:04.557 "tls_version": 0, 00:23:04.557 "enable_ktls": false 00:23:04.557 } 00:23:04.557 } 00:23:04.557 ] 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "subsystem": "vmd", 00:23:04.557 "config": [] 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "subsystem": "accel", 00:23:04.557 "config": [ 00:23:04.557 { 00:23:04.557 "method": "accel_set_options", 00:23:04.557 "params": { 00:23:04.557 "small_cache_size": 128, 00:23:04.557 "large_cache_size": 16, 00:23:04.557 "task_count": 2048, 00:23:04.557 "sequence_count": 2048, 00:23:04.557 "buf_count": 2048 00:23:04.557 } 00:23:04.557 } 00:23:04.557 ] 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "subsystem": "bdev", 00:23:04.557 "config": [ 00:23:04.557 { 00:23:04.557 "method": "bdev_set_options", 00:23:04.557 "params": { 00:23:04.557 "bdev_io_pool_size": 65535, 00:23:04.557 "bdev_io_cache_size": 256, 00:23:04.557 "bdev_auto_examine": true, 00:23:04.557 "iobuf_small_cache_size": 128, 00:23:04.557 "iobuf_large_cache_size": 16 00:23:04.557 } 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "method": "bdev_raid_set_options", 00:23:04.557 "params": { 00:23:04.557 "process_window_size_kb": 1024, 00:23:04.557 "process_max_bandwidth_mb_sec": 0 00:23:04.557 } 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "method": "bdev_iscsi_set_options", 00:23:04.557 "params": { 00:23:04.557 "timeout_sec": 30 00:23:04.557 } 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "method": "bdev_nvme_set_options", 00:23:04.557 "params": { 00:23:04.557 "action_on_timeout": "none", 00:23:04.557 "timeout_us": 0, 00:23:04.557 "timeout_admin_us": 0, 00:23:04.557 "keep_alive_timeout_ms": 10000, 00:23:04.557 
"arbitration_burst": 0, 00:23:04.557 "low_priority_weight": 0, 00:23:04.557 "medium_priority_weight": 0, 00:23:04.557 "high_priority_weight": 0, 00:23:04.557 "nvme_adminq_poll_period_us": 10000, 00:23:04.557 "nvme_ioq_poll_period_us": 0, 00:23:04.557 "io_queue_requests": 512, 00:23:04.557 "delay_cmd_submit": true, 00:23:04.557 "transport_retry_count": 4, 00:23:04.557 "bdev_retry_count": 3, 00:23:04.557 "transport_ack_timeout": 0, 00:23:04.557 "ctrlr_loss_timeout_sec": 0, 00:23:04.557 "reconnect_delay_sec": 0, 00:23:04.557 "fast_io_fail_timeout_sec": 0, 00:23:04.557 "disable_auto_failback": false, 00:23:04.557 "generate_uuids": false, 00:23:04.557 "transport_tos": 0, 00:23:04.557 "nvme_error_stat": false, 00:23:04.557 "rdma_srq_size": 0, 00:23:04.557 "io_path_stat": false, 00:23:04.557 "allow_accel_sequence": false, 00:23:04.557 "rdma_max_cq_size": 0, 00:23:04.557 "rdma_cm_event_timeout_ms": 0, 00:23:04.557 "dhchap_digests": [ 00:23:04.557 "sha256", 00:23:04.557 "sha384", 00:23:04.557 "sha512" 00:23:04.557 ], 00:23:04.557 "dhchap_dhgroups": [ 00:23:04.557 "null", 00:23:04.557 "ffdhe2048", 00:23:04.557 "ffdhe3072", 00:23:04.557 "ffdhe4096", 00:23:04.557 "ffdhe6144", 00:23:04.557 "ffdhe8192" 00:23:04.557 ], 00:23:04.557 "rdma_umr_per_io": false 00:23:04.557 } 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "method": "bdev_nvme_attach_controller", 00:23:04.557 "params": { 00:23:04.557 "name": "TLSTEST", 00:23:04.557 "trtype": "TCP", 00:23:04.557 "adrfam": "IPv4", 00:23:04.557 "traddr": "10.0.0.2", 00:23:04.557 "trsvcid": "4420", 00:23:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.557 "prchk_reftag": false, 00:23:04.557 "prchk_guard": false, 00:23:04.557 "ctrlr_loss_timeout_sec": 0, 00:23:04.557 "reconnect_delay_sec": 0, 00:23:04.557 "fast_io_fail_timeout_sec": 0, 00:23:04.557 "psk": "key0", 00:23:04.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.557 "hdgst": false, 00:23:04.557 "ddgst": false, 00:23:04.557 "multipath": "multipath" 00:23:04.557 } 
00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "method": "bdev_nvme_set_hotplug", 00:23:04.557 "params": { 00:23:04.557 "period_us": 100000, 00:23:04.557 "enable": false 00:23:04.557 } 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "method": "bdev_wait_for_examine" 00:23:04.557 } 00:23:04.557 ] 00:23:04.557 }, 00:23:04.557 { 00:23:04.557 "subsystem": "nbd", 00:23:04.557 "config": [] 00:23:04.557 } 00:23:04.557 ] 00:23:04.557 }' 00:23:04.557 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.557 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.557 [2024-12-14 16:36:34.516609] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:04.557 [2024-12-14 16:36:34.516656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023280 ] 00:23:04.557 [2024-12-14 16:36:34.590741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.557 [2024-12-14 16:36:34.613304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.815 [2024-12-14 16:36:34.760898] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.381 16:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.381 16:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:05.381 16:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:05.381 Running I/O for 10 seconds... 
00:23:07.692 5322.00 IOPS, 20.79 MiB/s [2024-12-14T15:36:38.715Z] 5430.00 IOPS, 21.21 MiB/s [2024-12-14T15:36:39.650Z] 5481.33 IOPS, 21.41 MiB/s [2024-12-14T15:36:40.586Z] 5492.50 IOPS, 21.46 MiB/s [2024-12-14T15:36:41.522Z] 5503.60 IOPS, 21.50 MiB/s [2024-12-14T15:36:42.899Z] 5489.83 IOPS, 21.44 MiB/s [2024-12-14T15:36:43.835Z] 5500.57 IOPS, 21.49 MiB/s [2024-12-14T15:36:44.770Z] 5518.00 IOPS, 21.55 MiB/s [2024-12-14T15:36:45.706Z] 5524.44 IOPS, 21.58 MiB/s [2024-12-14T15:36:45.706Z] 5521.10 IOPS, 21.57 MiB/s 00:23:15.620 Latency(us) 00:23:15.620 [2024-12-14T15:36:45.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.620 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:15.620 Verification LBA range: start 0x0 length 0x2000 00:23:15.620 TLSTESTn1 : 10.01 5526.77 21.59 0.00 0.00 23126.34 4774.77 22968.81 00:23:15.620 [2024-12-14T15:36:45.706Z] =================================================================================================================== 00:23:15.620 [2024-12-14T15:36:45.706Z] Total : 5526.77 21.59 0.00 0.00 23126.34 4774.77 22968.81 00:23:15.620 { 00:23:15.620 "results": [ 00:23:15.620 { 00:23:15.620 "job": "TLSTESTn1", 00:23:15.620 "core_mask": "0x4", 00:23:15.620 "workload": "verify", 00:23:15.620 "status": "finished", 00:23:15.620 "verify_range": { 00:23:15.620 "start": 0, 00:23:15.620 "length": 8192 00:23:15.620 }, 00:23:15.620 "queue_depth": 128, 00:23:15.620 "io_size": 4096, 00:23:15.620 "runtime": 10.012186, 00:23:15.620 "iops": 5526.765084068554, 00:23:15.620 "mibps": 21.588926109642788, 00:23:15.620 "io_failed": 0, 00:23:15.620 "io_timeout": 0, 00:23:15.620 "avg_latency_us": 23126.34363488191, 00:23:15.620 "min_latency_us": 4774.765714285714, 00:23:15.620 "max_latency_us": 22968.80761904762 00:23:15.620 } 00:23:15.620 ], 00:23:15.620 "core_count": 1 00:23:15.620 } 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1023280 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1023280 ']' 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1023280 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1023280 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1023280' 00:23:15.620 killing process with pid 1023280 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1023280 00:23:15.620 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.620 00:23:15.620 Latency(us) 00:23:15.620 [2024-12-14T15:36:45.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.620 [2024-12-14T15:36:45.706Z] =================================================================================================================== 00:23:15.620 [2024-12-14T15:36:45.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.620 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1023280 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1023063 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1023063 ']' 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1023063 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1023063 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1023063' 00:23:15.879 killing process with pid 1023063 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1023063 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1023063 00:23:15.879 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1025075 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1025075 00:23:15.880 
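The bdevperf summary above reports 5526.77 IOPS of 4096-byte reads as 21.59 MiB/s. That figure is just `iops * io_size / 2^20`, and the reported 23126 µs average latency is roughly what Little's law predicts for queue depth 128 at that rate; both can be reproduced from the JSON fields in the result block (numbers below are copied from this log):

```shell
# Reproduce the MiB/s and latency figures from the bdevperf result JSON above.
awk 'BEGIN {
  iops = 5526.765084068554   # "iops" from the result JSON
  io_size = 4096             # "io_size" from the result JSON
  # MiB/s = IOPS * bytes per I/O / 2^20:
  printf "%.2f MiB/s\n", iops * io_size / 1048576
  # Queueing sanity check: avg latency ~= queue_depth / IOPS
  # = 128 / 5526.77 * 1e6 ~= 23160 us, close to the reported 23126 us
  # (the small gap is submission overhead outside the device queue).
  printf "%.0f us\n", 128 / iops * 1e6
}'
```

The same arithmetic checks out for the single-core bdevperf run later in the log (5447.87 IOPS → 21.28 MiB/s).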
16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025075 ']' 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.880 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.138 [2024-12-14 16:36:45.982854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:16.138 [2024-12-14 16:36:45.982902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.138 [2024-12-14 16:36:46.044005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.138 [2024-12-14 16:36:46.065187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.138 [2024-12-14 16:36:46.065222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.138 [2024-12-14 16:36:46.065229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.138 [2024-12-14 16:36:46.065235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:16.138 [2024-12-14 16:36:46.065240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.138 [2024-12-14 16:36:46.065788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.faDpddw6hK 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.faDpddw6hK 00:23:16.138 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:16.397 [2024-12-14 16:36:46.356423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.397 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:16.656 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:16.656 [2024-12-14 16:36:46.737390] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:16.656 [2024-12-14 16:36:46.737585] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.914 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:16.914 malloc0 00:23:16.914 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:17.173 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:23:17.432 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1025319 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1025319 /var/tmp/bdevperf.sock 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025319 ']' 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.691 
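The `setup_nvmf_tgt` steps above build the TLS-enabled target one RPC at a time: TCP transport, subsystem, listener with `-k` (TLS), malloc bdev, namespace, PSK key file, then the host grant bound to that key. A condensed view of that sequence, with the rpc.py path shortened and each command taken verbatim from this log (`/tmp/tmp.faDpddw6hK` is the PSK file the test created earlier); the commands are printed rather than executed here, since each needs a running nvmf_tgt to talk to:

```shell
# Condensed setup_nvmf_tgt RPC sequence from this log. Every call targets the
# default /var/tmp/spdk.sock of the nvmf_tgt started above.
rpc=scripts/rpc.py
cmds=$(cat <<EOF
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.faDpddw6hK
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
EOF
)
printf '%s\n' "$cmds"
```

The client side mirrors the last two steps against the bdevperf socket: `keyring_file_add_key key0` with the same PSK file, then `bdev_nvme_attach_controller ... --psk key0`, which is exactly what the `-s /var/tmp/bdevperf.sock` calls below do.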
16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.691 [2024-12-14 16:36:47.546320] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:17.691 [2024-12-14 16:36:47.546366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025319 ] 00:23:17.691 [2024-12-14 16:36:47.624227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.691 [2024-12-14 16:36:47.647444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.691 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:23:17.950 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:18.208 [2024-12-14 16:36:48.183523] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:18.208 nvme0n1 00:23:18.208 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.467 Running I/O for 1 seconds... 00:23:19.403 5393.00 IOPS, 21.07 MiB/s 00:23:19.403 Latency(us) 00:23:19.403 [2024-12-14T15:36:49.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.403 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:19.403 Verification LBA range: start 0x0 length 0x2000 00:23:19.403 nvme0n1 : 1.01 5447.87 21.28 0.00 0.00 23334.91 5523.75 28960.67 00:23:19.403 [2024-12-14T15:36:49.489Z] =================================================================================================================== 00:23:19.403 [2024-12-14T15:36:49.489Z] Total : 5447.87 21.28 0.00 0.00 23334.91 5523.75 28960.67 00:23:19.403 { 00:23:19.403 "results": [ 00:23:19.403 { 00:23:19.403 "job": "nvme0n1", 00:23:19.403 "core_mask": "0x2", 00:23:19.403 "workload": "verify", 00:23:19.403 "status": "finished", 00:23:19.403 "verify_range": { 00:23:19.403 "start": 0, 00:23:19.403 "length": 8192 00:23:19.403 }, 00:23:19.403 "queue_depth": 128, 00:23:19.403 "io_size": 4096, 00:23:19.403 "runtime": 1.013608, 00:23:19.403 "iops": 5447.865446997262, 00:23:19.403 "mibps": 21.280724402333053, 00:23:19.403 "io_failed": 0, 00:23:19.403 "io_timeout": 0, 00:23:19.403 "avg_latency_us": 23334.909386178228, 00:23:19.403 "min_latency_us": 5523.748571428571, 00:23:19.403 "max_latency_us": 28960.670476190477 00:23:19.403 } 00:23:19.403 ], 00:23:19.403 "core_count": 1 00:23:19.403 } 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1025319 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025319 ']' 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1025319 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025319 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025319' 00:23:19.403 killing process with pid 1025319 00:23:19.403 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025319 00:23:19.403 Received shutdown signal, test time was about 1.000000 seconds 00:23:19.403 00:23:19.403 Latency(us) 00:23:19.403 [2024-12-14T15:36:49.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.403 [2024-12-14T15:36:49.490Z] =================================================================================================================== 00:23:19.404 [2024-12-14T15:36:49.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.404 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025319 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1025075 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025075 ']' 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025075 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025075 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025075' 00:23:19.663 killing process with pid 1025075 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025075 00:23:19.663 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025075 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1025774 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1025774 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025774 ']' 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.922 16:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.922 [2024-12-14 16:36:49.850505] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:19.922 [2024-12-14 16:36:49.850561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.922 [2024-12-14 16:36:49.927823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.922 [2024-12-14 16:36:49.948635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.922 [2024-12-14 16:36:49.948673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.922 [2024-12-14 16:36:49.948679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.922 [2024-12-14 16:36:49.948686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.922 [2024-12-14 16:36:49.948691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:19.923 [2024-12-14 16:36:49.949198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.181 [2024-12-14 16:36:50.084942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.181 malloc0 00:23:20.181 [2024-12-14 16:36:50.112916] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.181 [2024-12-14 16:36:50.113103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1025803 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1025803 /var/tmp/bdevperf.sock 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025803 ']' 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.181 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.181 [2024-12-14 16:36:50.186812] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:20.181 [2024-12-14 16:36:50.186853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025803 ] 00:23:20.181 [2024-12-14 16:36:50.243743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.181 [2024-12-14 16:36:50.265841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.440 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.440 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.440 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faDpddw6hK 00:23:20.699 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:20.699 [2024-12-14 16:36:50.728923] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.957 nvme0n1 00:23:20.957 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:20.957 Running I/O for 1 seconds... 
00:23:21.893 5224.00 IOPS, 20.41 MiB/s 00:23:21.893 Latency(us) 00:23:21.893 [2024-12-14T15:36:51.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.894 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:21.894 Verification LBA range: start 0x0 length 0x2000 00:23:21.894 nvme0n1 : 1.02 5260.46 20.55 0.00 0.00 24150.47 6085.49 21720.50 00:23:21.894 [2024-12-14T15:36:51.980Z] =================================================================================================================== 00:23:21.894 [2024-12-14T15:36:51.980Z] Total : 5260.46 20.55 0.00 0.00 24150.47 6085.49 21720.50 00:23:21.894 { 00:23:21.894 "results": [ 00:23:21.894 { 00:23:21.894 "job": "nvme0n1", 00:23:21.894 "core_mask": "0x2", 00:23:21.894 "workload": "verify", 00:23:21.894 "status": "finished", 00:23:21.894 "verify_range": { 00:23:21.894 "start": 0, 00:23:21.894 "length": 8192 00:23:21.894 }, 00:23:21.894 "queue_depth": 128, 00:23:21.894 "io_size": 4096, 00:23:21.894 "runtime": 1.017402, 00:23:21.894 "iops": 5260.457518267116, 00:23:21.894 "mibps": 20.548662180730922, 00:23:21.894 "io_failed": 0, 00:23:21.894 "io_timeout": 0, 00:23:21.894 "avg_latency_us": 24150.46965335611, 00:23:21.894 "min_latency_us": 6085.4857142857145, 00:23:21.894 "max_latency_us": 21720.502857142856 00:23:21.894 } 00:23:21.894 ], 00:23:21.894 "core_count": 1 00:23:21.894 } 00:23:21.894 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:21.894 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.894 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.153 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.153 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:22.153 "subsystems": [ 00:23:22.153 { 00:23:22.153 "subsystem": 
"keyring", 00:23:22.153 "config": [ 00:23:22.153 { 00:23:22.153 "method": "keyring_file_add_key", 00:23:22.153 "params": { 00:23:22.153 "name": "key0", 00:23:22.153 "path": "/tmp/tmp.faDpddw6hK" 00:23:22.153 } 00:23:22.153 } 00:23:22.153 ] 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "subsystem": "iobuf", 00:23:22.153 "config": [ 00:23:22.153 { 00:23:22.153 "method": "iobuf_set_options", 00:23:22.153 "params": { 00:23:22.153 "small_pool_count": 8192, 00:23:22.153 "large_pool_count": 1024, 00:23:22.153 "small_bufsize": 8192, 00:23:22.153 "large_bufsize": 135168, 00:23:22.153 "enable_numa": false 00:23:22.153 } 00:23:22.153 } 00:23:22.153 ] 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "subsystem": "sock", 00:23:22.153 "config": [ 00:23:22.153 { 00:23:22.153 "method": "sock_set_default_impl", 00:23:22.153 "params": { 00:23:22.153 "impl_name": "posix" 00:23:22.153 } 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "method": "sock_impl_set_options", 00:23:22.153 "params": { 00:23:22.153 "impl_name": "ssl", 00:23:22.153 "recv_buf_size": 4096, 00:23:22.153 "send_buf_size": 4096, 00:23:22.153 "enable_recv_pipe": true, 00:23:22.153 "enable_quickack": false, 00:23:22.153 "enable_placement_id": 0, 00:23:22.153 "enable_zerocopy_send_server": true, 00:23:22.153 "enable_zerocopy_send_client": false, 00:23:22.153 "zerocopy_threshold": 0, 00:23:22.153 "tls_version": 0, 00:23:22.153 "enable_ktls": false 00:23:22.153 } 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "method": "sock_impl_set_options", 00:23:22.153 "params": { 00:23:22.153 "impl_name": "posix", 00:23:22.153 "recv_buf_size": 2097152, 00:23:22.153 "send_buf_size": 2097152, 00:23:22.153 "enable_recv_pipe": true, 00:23:22.153 "enable_quickack": false, 00:23:22.153 "enable_placement_id": 0, 00:23:22.153 "enable_zerocopy_send_server": true, 00:23:22.153 "enable_zerocopy_send_client": false, 00:23:22.153 "zerocopy_threshold": 0, 00:23:22.153 "tls_version": 0, 00:23:22.153 "enable_ktls": false 00:23:22.153 } 00:23:22.153 } 00:23:22.153 
] 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "subsystem": "vmd", 00:23:22.153 "config": [] 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "subsystem": "accel", 00:23:22.153 "config": [ 00:23:22.153 { 00:23:22.153 "method": "accel_set_options", 00:23:22.153 "params": { 00:23:22.153 "small_cache_size": 128, 00:23:22.153 "large_cache_size": 16, 00:23:22.153 "task_count": 2048, 00:23:22.153 "sequence_count": 2048, 00:23:22.153 "buf_count": 2048 00:23:22.153 } 00:23:22.153 } 00:23:22.153 ] 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "subsystem": "bdev", 00:23:22.153 "config": [ 00:23:22.153 { 00:23:22.153 "method": "bdev_set_options", 00:23:22.153 "params": { 00:23:22.153 "bdev_io_pool_size": 65535, 00:23:22.153 "bdev_io_cache_size": 256, 00:23:22.153 "bdev_auto_examine": true, 00:23:22.153 "iobuf_small_cache_size": 128, 00:23:22.153 "iobuf_large_cache_size": 16 00:23:22.153 } 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "method": "bdev_raid_set_options", 00:23:22.153 "params": { 00:23:22.153 "process_window_size_kb": 1024, 00:23:22.153 "process_max_bandwidth_mb_sec": 0 00:23:22.153 } 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "method": "bdev_iscsi_set_options", 00:23:22.153 "params": { 00:23:22.153 "timeout_sec": 30 00:23:22.153 } 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "method": "bdev_nvme_set_options", 00:23:22.153 "params": { 00:23:22.153 "action_on_timeout": "none", 00:23:22.153 "timeout_us": 0, 00:23:22.153 "timeout_admin_us": 0, 00:23:22.153 "keep_alive_timeout_ms": 10000, 00:23:22.153 "arbitration_burst": 0, 00:23:22.153 "low_priority_weight": 0, 00:23:22.153 "medium_priority_weight": 0, 00:23:22.153 "high_priority_weight": 0, 00:23:22.153 "nvme_adminq_poll_period_us": 10000, 00:23:22.153 "nvme_ioq_poll_period_us": 0, 00:23:22.153 "io_queue_requests": 0, 00:23:22.153 "delay_cmd_submit": true, 00:23:22.153 "transport_retry_count": 4, 00:23:22.153 "bdev_retry_count": 3, 00:23:22.153 "transport_ack_timeout": 0, 00:23:22.153 "ctrlr_loss_timeout_sec": 0, 
00:23:22.153 "reconnect_delay_sec": 0, 00:23:22.153 "fast_io_fail_timeout_sec": 0, 00:23:22.153 "disable_auto_failback": false, 00:23:22.153 "generate_uuids": false, 00:23:22.153 "transport_tos": 0, 00:23:22.153 "nvme_error_stat": false, 00:23:22.153 "rdma_srq_size": 0, 00:23:22.153 "io_path_stat": false, 00:23:22.153 "allow_accel_sequence": false, 00:23:22.153 "rdma_max_cq_size": 0, 00:23:22.153 "rdma_cm_event_timeout_ms": 0, 00:23:22.153 "dhchap_digests": [ 00:23:22.153 "sha256", 00:23:22.153 "sha384", 00:23:22.153 "sha512" 00:23:22.153 ], 00:23:22.153 "dhchap_dhgroups": [ 00:23:22.153 "null", 00:23:22.153 "ffdhe2048", 00:23:22.153 "ffdhe3072", 00:23:22.153 "ffdhe4096", 00:23:22.153 "ffdhe6144", 00:23:22.153 "ffdhe8192" 00:23:22.153 ], 00:23:22.153 "rdma_umr_per_io": false 00:23:22.153 } 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "method": "bdev_nvme_set_hotplug", 00:23:22.153 "params": { 00:23:22.153 "period_us": 100000, 00:23:22.153 "enable": false 00:23:22.153 } 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "method": "bdev_malloc_create", 00:23:22.153 "params": { 00:23:22.153 "name": "malloc0", 00:23:22.153 "num_blocks": 8192, 00:23:22.153 "block_size": 4096, 00:23:22.153 "physical_block_size": 4096, 00:23:22.153 "uuid": "a08ddcb0-a536-4f35-aaf5-c61edb7e7f85", 00:23:22.153 "optimal_io_boundary": 0, 00:23:22.153 "md_size": 0, 00:23:22.153 "dif_type": 0, 00:23:22.153 "dif_is_head_of_md": false, 00:23:22.153 "dif_pi_format": 0 00:23:22.153 } 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "method": "bdev_wait_for_examine" 00:23:22.153 } 00:23:22.153 ] 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "subsystem": "nbd", 00:23:22.153 "config": [] 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "subsystem": "scheduler", 00:23:22.153 "config": [ 00:23:22.153 { 00:23:22.153 "method": "framework_set_scheduler", 00:23:22.153 "params": { 00:23:22.153 "name": "static" 00:23:22.153 } 00:23:22.153 } 00:23:22.153 ] 00:23:22.153 }, 00:23:22.153 { 00:23:22.153 "subsystem": "nvmf", 
00:23:22.153 "config": [ 00:23:22.153 { 00:23:22.153 "method": "nvmf_set_config", 00:23:22.153 "params": { 00:23:22.153 "discovery_filter": "match_any", 00:23:22.153 "admin_cmd_passthru": { 00:23:22.153 "identify_ctrlr": false 00:23:22.153 }, 00:23:22.153 "dhchap_digests": [ 00:23:22.153 "sha256", 00:23:22.153 "sha384", 00:23:22.153 "sha512" 00:23:22.153 ], 00:23:22.153 "dhchap_dhgroups": [ 00:23:22.153 "null", 00:23:22.153 "ffdhe2048", 00:23:22.153 "ffdhe3072", 00:23:22.153 "ffdhe4096", 00:23:22.153 "ffdhe6144", 00:23:22.153 "ffdhe8192" 00:23:22.153 ] 00:23:22.154 } 00:23:22.154 }, 00:23:22.154 { 00:23:22.154 "method": "nvmf_set_max_subsystems", 00:23:22.154 "params": { 00:23:22.154 "max_subsystems": 1024 00:23:22.154 } 00:23:22.154 }, 00:23:22.154 { 00:23:22.154 "method": "nvmf_set_crdt", 00:23:22.154 "params": { 00:23:22.154 "crdt1": 0, 00:23:22.154 "crdt2": 0, 00:23:22.154 "crdt3": 0 00:23:22.154 } 00:23:22.154 }, 00:23:22.154 { 00:23:22.154 "method": "nvmf_create_transport", 00:23:22.154 "params": { 00:23:22.154 "trtype": "TCP", 00:23:22.154 "max_queue_depth": 128, 00:23:22.154 "max_io_qpairs_per_ctrlr": 127, 00:23:22.154 "in_capsule_data_size": 4096, 00:23:22.154 "max_io_size": 131072, 00:23:22.154 "io_unit_size": 131072, 00:23:22.154 "max_aq_depth": 128, 00:23:22.154 "num_shared_buffers": 511, 00:23:22.154 "buf_cache_size": 4294967295, 00:23:22.154 "dif_insert_or_strip": false, 00:23:22.154 "zcopy": false, 00:23:22.154 "c2h_success": false, 00:23:22.154 "sock_priority": 0, 00:23:22.154 "abort_timeout_sec": 1, 00:23:22.154 "ack_timeout": 0, 00:23:22.154 "data_wr_pool_size": 0 00:23:22.154 } 00:23:22.154 }, 00:23:22.154 { 00:23:22.154 "method": "nvmf_create_subsystem", 00:23:22.154 "params": { 00:23:22.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.154 "allow_any_host": false, 00:23:22.154 "serial_number": "00000000000000000000", 00:23:22.154 "model_number": "SPDK bdev Controller", 00:23:22.154 "max_namespaces": 32, 00:23:22.154 "min_cntlid": 1, 
00:23:22.154 "max_cntlid": 65519, 00:23:22.154 "ana_reporting": false 00:23:22.154 } 00:23:22.154 }, 00:23:22.154 { 00:23:22.154 "method": "nvmf_subsystem_add_host", 00:23:22.154 "params": { 00:23:22.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.154 "host": "nqn.2016-06.io.spdk:host1", 00:23:22.154 "psk": "key0" 00:23:22.154 } 00:23:22.154 }, 00:23:22.154 { 00:23:22.154 "method": "nvmf_subsystem_add_ns", 00:23:22.154 "params": { 00:23:22.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.154 "namespace": { 00:23:22.154 "nsid": 1, 00:23:22.154 "bdev_name": "malloc0", 00:23:22.154 "nguid": "A08DDCB0A5364F35AAF5C61EDB7E7F85", 00:23:22.154 "uuid": "a08ddcb0-a536-4f35-aaf5-c61edb7e7f85", 00:23:22.154 "no_auto_visible": false 00:23:22.154 } 00:23:22.154 } 00:23:22.154 }, 00:23:22.154 { 00:23:22.154 "method": "nvmf_subsystem_add_listener", 00:23:22.154 "params": { 00:23:22.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.154 "listen_address": { 00:23:22.154 "trtype": "TCP", 00:23:22.154 "adrfam": "IPv4", 00:23:22.154 "traddr": "10.0.0.2", 00:23:22.154 "trsvcid": "4420" 00:23:22.154 }, 00:23:22.154 "secure_channel": false, 00:23:22.154 "sock_impl": "ssl" 00:23:22.154 } 00:23:22.154 } 00:23:22.154 ] 00:23:22.154 } 00:23:22.154 ] 00:23:22.154 }' 00:23:22.154 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:22.413 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:22.413 "subsystems": [ 00:23:22.413 { 00:23:22.413 "subsystem": "keyring", 00:23:22.413 "config": [ 00:23:22.413 { 00:23:22.413 "method": "keyring_file_add_key", 00:23:22.413 "params": { 00:23:22.413 "name": "key0", 00:23:22.413 "path": "/tmp/tmp.faDpddw6hK" 00:23:22.413 } 00:23:22.413 } 00:23:22.413 ] 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "subsystem": "iobuf", 00:23:22.413 "config": [ 00:23:22.413 { 00:23:22.413 "method": 
"iobuf_set_options", 00:23:22.413 "params": { 00:23:22.413 "small_pool_count": 8192, 00:23:22.413 "large_pool_count": 1024, 00:23:22.413 "small_bufsize": 8192, 00:23:22.413 "large_bufsize": 135168, 00:23:22.413 "enable_numa": false 00:23:22.413 } 00:23:22.413 } 00:23:22.413 ] 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "subsystem": "sock", 00:23:22.413 "config": [ 00:23:22.413 { 00:23:22.413 "method": "sock_set_default_impl", 00:23:22.413 "params": { 00:23:22.413 "impl_name": "posix" 00:23:22.413 } 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "method": "sock_impl_set_options", 00:23:22.413 "params": { 00:23:22.413 "impl_name": "ssl", 00:23:22.413 "recv_buf_size": 4096, 00:23:22.413 "send_buf_size": 4096, 00:23:22.413 "enable_recv_pipe": true, 00:23:22.413 "enable_quickack": false, 00:23:22.413 "enable_placement_id": 0, 00:23:22.413 "enable_zerocopy_send_server": true, 00:23:22.413 "enable_zerocopy_send_client": false, 00:23:22.413 "zerocopy_threshold": 0, 00:23:22.413 "tls_version": 0, 00:23:22.413 "enable_ktls": false 00:23:22.413 } 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "method": "sock_impl_set_options", 00:23:22.413 "params": { 00:23:22.413 "impl_name": "posix", 00:23:22.413 "recv_buf_size": 2097152, 00:23:22.413 "send_buf_size": 2097152, 00:23:22.413 "enable_recv_pipe": true, 00:23:22.413 "enable_quickack": false, 00:23:22.413 "enable_placement_id": 0, 00:23:22.413 "enable_zerocopy_send_server": true, 00:23:22.413 "enable_zerocopy_send_client": false, 00:23:22.413 "zerocopy_threshold": 0, 00:23:22.413 "tls_version": 0, 00:23:22.413 "enable_ktls": false 00:23:22.413 } 00:23:22.413 } 00:23:22.413 ] 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "subsystem": "vmd", 00:23:22.413 "config": [] 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "subsystem": "accel", 00:23:22.413 "config": [ 00:23:22.413 { 00:23:22.413 "method": "accel_set_options", 00:23:22.413 "params": { 00:23:22.413 "small_cache_size": 128, 00:23:22.413 "large_cache_size": 16, 00:23:22.413 "task_count": 
2048, 00:23:22.413 "sequence_count": 2048, 00:23:22.413 "buf_count": 2048 00:23:22.413 } 00:23:22.413 } 00:23:22.413 ] 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "subsystem": "bdev", 00:23:22.413 "config": [ 00:23:22.413 { 00:23:22.413 "method": "bdev_set_options", 00:23:22.413 "params": { 00:23:22.413 "bdev_io_pool_size": 65535, 00:23:22.413 "bdev_io_cache_size": 256, 00:23:22.413 "bdev_auto_examine": true, 00:23:22.413 "iobuf_small_cache_size": 128, 00:23:22.413 "iobuf_large_cache_size": 16 00:23:22.413 } 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "method": "bdev_raid_set_options", 00:23:22.413 "params": { 00:23:22.413 "process_window_size_kb": 1024, 00:23:22.413 "process_max_bandwidth_mb_sec": 0 00:23:22.413 } 00:23:22.413 }, 00:23:22.413 { 00:23:22.413 "method": "bdev_iscsi_set_options", 00:23:22.413 "params": { 00:23:22.413 "timeout_sec": 30 00:23:22.413 } 00:23:22.414 }, 00:23:22.414 { 00:23:22.414 "method": "bdev_nvme_set_options", 00:23:22.414 "params": { 00:23:22.414 "action_on_timeout": "none", 00:23:22.414 "timeout_us": 0, 00:23:22.414 "timeout_admin_us": 0, 00:23:22.414 "keep_alive_timeout_ms": 10000, 00:23:22.414 "arbitration_burst": 0, 00:23:22.414 "low_priority_weight": 0, 00:23:22.414 "medium_priority_weight": 0, 00:23:22.414 "high_priority_weight": 0, 00:23:22.414 "nvme_adminq_poll_period_us": 10000, 00:23:22.414 "nvme_ioq_poll_period_us": 0, 00:23:22.414 "io_queue_requests": 512, 00:23:22.414 "delay_cmd_submit": true, 00:23:22.414 "transport_retry_count": 4, 00:23:22.414 "bdev_retry_count": 3, 00:23:22.414 "transport_ack_timeout": 0, 00:23:22.414 "ctrlr_loss_timeout_sec": 0, 00:23:22.414 "reconnect_delay_sec": 0, 00:23:22.414 "fast_io_fail_timeout_sec": 0, 00:23:22.414 "disable_auto_failback": false, 00:23:22.414 "generate_uuids": false, 00:23:22.414 "transport_tos": 0, 00:23:22.414 "nvme_error_stat": false, 00:23:22.414 "rdma_srq_size": 0, 00:23:22.414 "io_path_stat": false, 00:23:22.414 "allow_accel_sequence": false, 00:23:22.414 
"rdma_max_cq_size": 0, 00:23:22.414 "rdma_cm_event_timeout_ms": 0, 00:23:22.414 "dhchap_digests": [ 00:23:22.414 "sha256", 00:23:22.414 "sha384", 00:23:22.414 "sha512" 00:23:22.414 ], 00:23:22.414 "dhchap_dhgroups": [ 00:23:22.414 "null", 00:23:22.414 "ffdhe2048", 00:23:22.414 "ffdhe3072", 00:23:22.414 "ffdhe4096", 00:23:22.414 "ffdhe6144", 00:23:22.414 "ffdhe8192" 00:23:22.414 ], 00:23:22.414 "rdma_umr_per_io": false 00:23:22.414 } 00:23:22.414 }, 00:23:22.414 { 00:23:22.414 "method": "bdev_nvme_attach_controller", 00:23:22.414 "params": { 00:23:22.414 "name": "nvme0", 00:23:22.414 "trtype": "TCP", 00:23:22.414 "adrfam": "IPv4", 00:23:22.414 "traddr": "10.0.0.2", 00:23:22.414 "trsvcid": "4420", 00:23:22.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.414 "prchk_reftag": false, 00:23:22.414 "prchk_guard": false, 00:23:22.414 "ctrlr_loss_timeout_sec": 0, 00:23:22.414 "reconnect_delay_sec": 0, 00:23:22.414 "fast_io_fail_timeout_sec": 0, 00:23:22.414 "psk": "key0", 00:23:22.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.414 "hdgst": false, 00:23:22.414 "ddgst": false, 00:23:22.414 "multipath": "multipath" 00:23:22.414 } 00:23:22.414 }, 00:23:22.414 { 00:23:22.414 "method": "bdev_nvme_set_hotplug", 00:23:22.414 "params": { 00:23:22.414 "period_us": 100000, 00:23:22.414 "enable": false 00:23:22.414 } 00:23:22.414 }, 00:23:22.414 { 00:23:22.414 "method": "bdev_enable_histogram", 00:23:22.414 "params": { 00:23:22.414 "name": "nvme0n1", 00:23:22.414 "enable": true 00:23:22.414 } 00:23:22.414 }, 00:23:22.414 { 00:23:22.414 "method": "bdev_wait_for_examine" 00:23:22.414 } 00:23:22.414 ] 00:23:22.414 }, 00:23:22.414 { 00:23:22.414 "subsystem": "nbd", 00:23:22.414 "config": [] 00:23:22.414 } 00:23:22.414 ] 00:23:22.414 }' 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1025803 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025803 ']' 00:23:22.414 16:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025803 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025803 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025803' 00:23:22.414 killing process with pid 1025803 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025803 00:23:22.414 Received shutdown signal, test time was about 1.000000 seconds 00:23:22.414 00:23:22.414 Latency(us) 00:23:22.414 [2024-12-14T15:36:52.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.414 [2024-12-14T15:36:52.500Z] =================================================================================================================== 00:23:22.414 [2024-12-14T15:36:52.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.414 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025803 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1025774 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025774 ']' 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025774 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:22.674 16:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025774 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025774' 00:23:22.674 killing process with pid 1025774 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025774 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025774 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.674 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:22.674 "subsystems": [ 00:23:22.674 { 00:23:22.674 "subsystem": "keyring", 00:23:22.674 "config": [ 00:23:22.674 { 00:23:22.674 "method": "keyring_file_add_key", 00:23:22.674 "params": { 00:23:22.674 "name": "key0", 00:23:22.674 "path": "/tmp/tmp.faDpddw6hK" 00:23:22.674 } 00:23:22.674 } 00:23:22.674 ] 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "subsystem": "iobuf", 00:23:22.674 "config": [ 00:23:22.674 { 00:23:22.674 "method": "iobuf_set_options", 00:23:22.674 "params": { 00:23:22.674 "small_pool_count": 8192, 00:23:22.674 "large_pool_count": 1024, 00:23:22.674 "small_bufsize": 8192, 00:23:22.674 "large_bufsize": 135168, 00:23:22.674 "enable_numa": false 00:23:22.674 } 00:23:22.674 } 
00:23:22.674 ] 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "subsystem": "sock", 00:23:22.674 "config": [ 00:23:22.674 { 00:23:22.674 "method": "sock_set_default_impl", 00:23:22.674 "params": { 00:23:22.674 "impl_name": "posix" 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "method": "sock_impl_set_options", 00:23:22.674 "params": { 00:23:22.674 "impl_name": "ssl", 00:23:22.674 "recv_buf_size": 4096, 00:23:22.674 "send_buf_size": 4096, 00:23:22.674 "enable_recv_pipe": true, 00:23:22.674 "enable_quickack": false, 00:23:22.674 "enable_placement_id": 0, 00:23:22.674 "enable_zerocopy_send_server": true, 00:23:22.674 "enable_zerocopy_send_client": false, 00:23:22.674 "zerocopy_threshold": 0, 00:23:22.674 "tls_version": 0, 00:23:22.674 "enable_ktls": false 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "method": "sock_impl_set_options", 00:23:22.674 "params": { 00:23:22.674 "impl_name": "posix", 00:23:22.674 "recv_buf_size": 2097152, 00:23:22.674 "send_buf_size": 2097152, 00:23:22.674 "enable_recv_pipe": true, 00:23:22.674 "enable_quickack": false, 00:23:22.674 "enable_placement_id": 0, 00:23:22.674 "enable_zerocopy_send_server": true, 00:23:22.674 "enable_zerocopy_send_client": false, 00:23:22.674 "zerocopy_threshold": 0, 00:23:22.674 "tls_version": 0, 00:23:22.674 "enable_ktls": false 00:23:22.674 } 00:23:22.674 } 00:23:22.674 ] 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "subsystem": "vmd", 00:23:22.674 "config": [] 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "subsystem": "accel", 00:23:22.674 "config": [ 00:23:22.674 { 00:23:22.674 "method": "accel_set_options", 00:23:22.674 "params": { 00:23:22.674 "small_cache_size": 128, 00:23:22.674 "large_cache_size": 16, 00:23:22.674 "task_count": 2048, 00:23:22.674 "sequence_count": 2048, 00:23:22.674 "buf_count": 2048 00:23:22.674 } 00:23:22.674 } 00:23:22.674 ] 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "subsystem": "bdev", 00:23:22.674 "config": [ 00:23:22.674 { 00:23:22.674 "method": 
"bdev_set_options", 00:23:22.674 "params": { 00:23:22.674 "bdev_io_pool_size": 65535, 00:23:22.674 "bdev_io_cache_size": 256, 00:23:22.674 "bdev_auto_examine": true, 00:23:22.674 "iobuf_small_cache_size": 128, 00:23:22.674 "iobuf_large_cache_size": 16 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "method": "bdev_raid_set_options", 00:23:22.674 "params": { 00:23:22.674 "process_window_size_kb": 1024, 00:23:22.674 "process_max_bandwidth_mb_sec": 0 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "method": "bdev_iscsi_set_options", 00:23:22.674 "params": { 00:23:22.674 "timeout_sec": 30 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "method": "bdev_nvme_set_options", 00:23:22.674 "params": { 00:23:22.674 "action_on_timeout": "none", 00:23:22.674 "timeout_us": 0, 00:23:22.674 "timeout_admin_us": 0, 00:23:22.674 "keep_alive_timeout_ms": 10000, 00:23:22.674 "arbitration_burst": 0, 00:23:22.674 "low_priority_weight": 0, 00:23:22.674 "medium_priority_weight": 0, 00:23:22.674 "high_priority_weight": 0, 00:23:22.674 "nvme_adminq_poll_period_us": 10000, 00:23:22.674 "nvme_ioq_poll_period_us": 0, 00:23:22.674 "io_queue_requests": 0, 00:23:22.674 "delay_cmd_submit": true, 00:23:22.674 "transport_retry_count": 4, 00:23:22.674 "bdev_retry_count": 3, 00:23:22.674 "transport_ack_timeout": 0, 00:23:22.674 "ctrlr_loss_timeout_sec": 0, 00:23:22.674 "reconnect_delay_sec": 0, 00:23:22.674 "fast_io_fail_timeout_sec": 0, 00:23:22.674 "disable_auto_failback": false, 00:23:22.674 "generate_uuids": false, 00:23:22.674 "transport_tos": 0, 00:23:22.674 "nvme_error_stat": false, 00:23:22.674 "rdma_srq_size": 0, 00:23:22.674 "io_path_stat": false, 00:23:22.674 "allow_accel_sequence": false, 00:23:22.674 "rdma_max_cq_size": 0, 00:23:22.674 "rdma_cm_event_timeout_ms": 0, 00:23:22.674 "dhchap_digests": [ 00:23:22.674 "sha256", 00:23:22.674 "sha384", 00:23:22.674 "sha512" 00:23:22.674 ], 00:23:22.674 "dhchap_dhgroups": [ 00:23:22.674 "null", 00:23:22.674 
"ffdhe2048", 00:23:22.674 "ffdhe3072", 00:23:22.674 "ffdhe4096", 00:23:22.674 "ffdhe6144", 00:23:22.674 "ffdhe8192" 00:23:22.674 ], 00:23:22.674 "rdma_umr_per_io": false 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "method": "bdev_nvme_set_hotplug", 00:23:22.674 "params": { 00:23:22.674 "period_us": 100000, 00:23:22.674 "enable": false 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "method": "bdev_malloc_create", 00:23:22.674 "params": { 00:23:22.674 "name": "malloc0", 00:23:22.674 "num_blocks": 8192, 00:23:22.674 "block_size": 4096, 00:23:22.674 "physical_block_size": 4096, 00:23:22.674 "uuid": "a08ddcb0-a536-4f35-aaf5-c61edb7e7f85", 00:23:22.674 "optimal_io_boundary": 0, 00:23:22.674 "md_size": 0, 00:23:22.674 "dif_type": 0, 00:23:22.674 "dif_is_head_of_md": false, 00:23:22.674 "dif_pi_format": 0 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "method": "bdev_wait_for_examine" 00:23:22.674 } 00:23:22.674 ] 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "subsystem": "nbd", 00:23:22.674 "config": [] 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "subsystem": "scheduler", 00:23:22.674 "config": [ 00:23:22.674 { 00:23:22.674 "method": "framework_set_scheduler", 00:23:22.674 "params": { 00:23:22.674 "name": "static" 00:23:22.674 } 00:23:22.674 } 00:23:22.674 ] 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 "subsystem": "nvmf", 00:23:22.674 "config": [ 00:23:22.674 { 00:23:22.674 "method": "nvmf_set_config", 00:23:22.674 "params": { 00:23:22.674 "discovery_filter": "match_any", 00:23:22.674 "admin_cmd_passthru": { 00:23:22.674 "identify_ctrlr": false 00:23:22.674 }, 00:23:22.674 "dhchap_digests": [ 00:23:22.674 "sha256", 00:23:22.674 "sha384", 00:23:22.674 "sha512" 00:23:22.674 ], 00:23:22.674 "dhchap_dhgroups": [ 00:23:22.674 "null", 00:23:22.674 "ffdhe2048", 00:23:22.674 "ffdhe3072", 00:23:22.674 "ffdhe4096", 00:23:22.674 "ffdhe6144", 00:23:22.674 "ffdhe8192" 00:23:22.674 ] 00:23:22.674 } 00:23:22.674 }, 00:23:22.674 { 00:23:22.674 
"method": "nvmf_set_max_subsystems", 00:23:22.674 "params": { 00:23:22.675 "max_subsystems": 1024 00:23:22.675 } 00:23:22.675 }, 00:23:22.675 { 00:23:22.675 "method": "nvmf_set_crdt", 00:23:22.675 "params": { 00:23:22.675 "crdt1": 0, 00:23:22.675 "crdt2": 0, 00:23:22.675 "crdt3": 0 00:23:22.675 } 00:23:22.675 }, 00:23:22.675 { 00:23:22.675 "method": "nvmf_create_transport", 00:23:22.675 "params": { 00:23:22.675 "trtype": "TCP", 00:23:22.675 "max_queue_depth": 128, 00:23:22.675 "max_io_qpairs_per_ctrlr": 127, 00:23:22.675 "in_capsule_data_size": 4096, 00:23:22.675 "max_io_size": 131072, 00:23:22.675 "io_unit_size": 131072, 00:23:22.675 "max_aq_depth": 128, 00:23:22.675 "num_shared_buffers": 511, 00:23:22.675 "buf_cache_size": 4294967295, 00:23:22.675 "dif_insert_or_strip": false, 00:23:22.675 "zcopy": false, 00:23:22.675 "c2h_success": false, 00:23:22.675 "sock_priority": 0, 00:23:22.675 "abort_timeout_sec": 1, 00:23:22.675 "ack_timeout": 0, 00:23:22.675 "data_wr_pool_size": 0 00:23:22.675 } 00:23:22.675 }, 00:23:22.675 { 00:23:22.675 "method": "nvmf_create_subsystem", 00:23:22.675 "params": { 00:23:22.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.675 "allow_any_host": false, 00:23:22.675 "serial_number": "00000000000000000000", 00:23:22.675 "model_number": "SPDK bdev Controller", 00:23:22.675 "max_namespaces": 32, 00:23:22.675 "min_cntlid": 1, 00:23:22.675 "max_cntlid": 65519, 00:23:22.675 "ana_reporting": false 00:23:22.675 } 00:23:22.675 }, 00:23:22.675 { 00:23:22.675 "method": "nvmf_subsystem_add_host", 00:23:22.675 "params": { 00:23:22.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.675 "host": "nqn.2016-06.io.spdk:host1", 00:23:22.675 "psk": "key0" 00:23:22.675 } 00:23:22.675 }, 00:23:22.675 { 00:23:22.675 "method": "nvmf_subsystem_add_ns", 00:23:22.675 "params": { 00:23:22.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.675 "namespace": { 00:23:22.675 "nsid": 1, 00:23:22.675 "bdev_name": "malloc0", 00:23:22.675 "nguid": 
"A08DDCB0A5364F35AAF5C61EDB7E7F85", 00:23:22.675 "uuid": "a08ddcb0-a536-4f35-aaf5-c61edb7e7f85", 00:23:22.675 "no_auto_visible": false 00:23:22.675 } 00:23:22.675 } 00:23:22.675 }, 00:23:22.675 { 00:23:22.675 "method": "nvmf_subsystem_add_listener", 00:23:22.675 "params": { 00:23:22.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.675 "listen_address": { 00:23:22.675 "trtype": "TCP", 00:23:22.675 "adrfam": "IPv4", 00:23:22.675 "traddr": "10.0.0.2", 00:23:22.675 "trsvcid": "4420" 00:23:22.675 }, 00:23:22.675 "secure_channel": false, 00:23:22.675 "sock_impl": "ssl" 00:23:22.675 } 00:23:22.675 } 00:23:22.675 ] 00:23:22.675 } 00:23:22.675 ] 00:23:22.675 }' 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1026260 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1026260 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1026260 ']' 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.675 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.934 [2024-12-14 16:36:52.790537] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:22.934 [2024-12-14 16:36:52.790588] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.934 [2024-12-14 16:36:52.869041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.934 [2024-12-14 16:36:52.890140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.934 [2024-12-14 16:36:52.890179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.934 [2024-12-14 16:36:52.890186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.934 [2024-12-14 16:36:52.890192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.934 [2024-12-14 16:36:52.890197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.934 [2024-12-14 16:36:52.890764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.193 [2024-12-14 16:36:53.098980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.193 [2024-12-14 16:36:53.131003] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.193 [2024-12-14 16:36:53.131197] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1026495 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1026495 /var/tmp/bdevperf.sock 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1026495 ']' 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.761 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:23.761 "subsystems": [ 00:23:23.761 { 00:23:23.761 "subsystem": "keyring", 00:23:23.761 "config": [ 00:23:23.761 { 00:23:23.761 "method": "keyring_file_add_key", 00:23:23.761 "params": { 00:23:23.761 "name": "key0", 00:23:23.761 "path": "/tmp/tmp.faDpddw6hK" 00:23:23.761 } 00:23:23.761 } 00:23:23.761 ] 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "subsystem": "iobuf", 00:23:23.761 "config": [ 00:23:23.761 { 00:23:23.761 "method": "iobuf_set_options", 00:23:23.761 "params": { 00:23:23.761 "small_pool_count": 8192, 00:23:23.761 "large_pool_count": 1024, 00:23:23.761 "small_bufsize": 8192, 00:23:23.761 "large_bufsize": 135168, 00:23:23.761 "enable_numa": false 00:23:23.761 } 00:23:23.761 } 00:23:23.761 ] 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "subsystem": "sock", 00:23:23.761 "config": [ 00:23:23.761 { 00:23:23.761 "method": "sock_set_default_impl", 00:23:23.761 "params": { 00:23:23.761 "impl_name": "posix" 00:23:23.761 } 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "method": "sock_impl_set_options", 00:23:23.761 "params": { 00:23:23.761 "impl_name": "ssl", 00:23:23.761 "recv_buf_size": 4096, 00:23:23.761 "send_buf_size": 4096, 00:23:23.761 "enable_recv_pipe": true, 00:23:23.761 "enable_quickack": false, 00:23:23.761 "enable_placement_id": 0, 00:23:23.761 "enable_zerocopy_send_server": true, 00:23:23.761 "enable_zerocopy_send_client": false, 00:23:23.761 "zerocopy_threshold": 0, 00:23:23.761 "tls_version": 0, 00:23:23.761 "enable_ktls": false 00:23:23.761 } 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "method": "sock_impl_set_options", 00:23:23.761 "params": { 
00:23:23.761 "impl_name": "posix", 00:23:23.761 "recv_buf_size": 2097152, 00:23:23.761 "send_buf_size": 2097152, 00:23:23.761 "enable_recv_pipe": true, 00:23:23.761 "enable_quickack": false, 00:23:23.761 "enable_placement_id": 0, 00:23:23.761 "enable_zerocopy_send_server": true, 00:23:23.761 "enable_zerocopy_send_client": false, 00:23:23.761 "zerocopy_threshold": 0, 00:23:23.761 "tls_version": 0, 00:23:23.761 "enable_ktls": false 00:23:23.761 } 00:23:23.761 } 00:23:23.761 ] 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "subsystem": "vmd", 00:23:23.761 "config": [] 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "subsystem": "accel", 00:23:23.761 "config": [ 00:23:23.761 { 00:23:23.761 "method": "accel_set_options", 00:23:23.761 "params": { 00:23:23.761 "small_cache_size": 128, 00:23:23.761 "large_cache_size": 16, 00:23:23.761 "task_count": 2048, 00:23:23.761 "sequence_count": 2048, 00:23:23.761 "buf_count": 2048 00:23:23.761 } 00:23:23.761 } 00:23:23.761 ] 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "subsystem": "bdev", 00:23:23.761 "config": [ 00:23:23.761 { 00:23:23.761 "method": "bdev_set_options", 00:23:23.761 "params": { 00:23:23.761 "bdev_io_pool_size": 65535, 00:23:23.761 "bdev_io_cache_size": 256, 00:23:23.761 "bdev_auto_examine": true, 00:23:23.761 "iobuf_small_cache_size": 128, 00:23:23.761 "iobuf_large_cache_size": 16 00:23:23.761 } 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "method": "bdev_raid_set_options", 00:23:23.761 "params": { 00:23:23.761 "process_window_size_kb": 1024, 00:23:23.761 "process_max_bandwidth_mb_sec": 0 00:23:23.761 } 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "method": "bdev_iscsi_set_options", 00:23:23.761 "params": { 00:23:23.761 "timeout_sec": 30 00:23:23.761 } 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "method": "bdev_nvme_set_options", 00:23:23.761 "params": { 00:23:23.761 "action_on_timeout": "none", 00:23:23.761 "timeout_us": 0, 00:23:23.761 "timeout_admin_us": 0, 00:23:23.761 "keep_alive_timeout_ms": 10000, 00:23:23.761 
"arbitration_burst": 0, 00:23:23.761 "low_priority_weight": 0, 00:23:23.761 "medium_priority_weight": 0, 00:23:23.761 "high_priority_weight": 0, 00:23:23.761 "nvme_adminq_poll_period_us": 10000, 00:23:23.761 "nvme_ioq_poll_period_us": 0, 00:23:23.761 "io_queue_requests": 512, 00:23:23.761 "delay_cmd_submit": true, 00:23:23.761 "transport_retry_count": 4, 00:23:23.761 "bdev_retry_count": 3, 00:23:23.761 "transport_ack_timeout": 0, 00:23:23.761 "ctrlr_loss_timeout_sec": 0, 00:23:23.761 "reconnect_delay_sec": 0, 00:23:23.761 "fast_io_fail_timeout_sec": 0, 00:23:23.761 "disable_auto_failback": false, 00:23:23.761 "generate_uuids": false, 00:23:23.761 "transport_tos": 0, 00:23:23.761 "nvme_error_stat": false, 00:23:23.761 "rdma_srq_size": 0, 00:23:23.761 "io_path_stat": false, 00:23:23.761 "allow_accel_sequence": false, 00:23:23.761 "rdma_max_cq_size": 0, 00:23:23.761 "rdma_cm_event_timeout_ms": 0, 00:23:23.761 "dhchap_digests": [ 00:23:23.761 "sha256", 00:23:23.761 "sha384", 00:23:23.761 "sha512" 00:23:23.761 ], 00:23:23.761 "dhchap_dhgroups": [ 00:23:23.761 "null", 00:23:23.761 "ffdhe2048", 00:23:23.761 "ffdhe3072", 00:23:23.761 "ffdhe4096", 00:23:23.761 "ffdhe6144", 00:23:23.761 "ffdhe8192" 00:23:23.761 ], 00:23:23.761 "rdma_umr_per_io": false 00:23:23.761 } 00:23:23.761 }, 00:23:23.761 { 00:23:23.761 "method": "bdev_nvme_attach_controller", 00:23:23.761 "params": { 00:23:23.761 "name": "nvme0", 00:23:23.761 "trtype": "TCP", 00:23:23.761 "adrfam": "IPv4", 00:23:23.761 "traddr": "10.0.0.2", 00:23:23.761 "trsvcid": "4420", 00:23:23.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.761 "prchk_reftag": false, 00:23:23.761 "prchk_guard": false, 00:23:23.762 "ctrlr_loss_timeout_sec": 0, 00:23:23.762 "reconnect_delay_sec": 0, 00:23:23.762 "fast_io_fail_timeout_sec": 0, 00:23:23.762 "psk": "key0", 00:23:23.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.762 "hdgst": false, 00:23:23.762 "ddgst": false, 00:23:23.762 "multipath": "multipath" 00:23:23.762 } 00:23:23.762 
}, 00:23:23.762 { 00:23:23.762 "method": "bdev_nvme_set_hotplug", 00:23:23.762 "params": { 00:23:23.762 "period_us": 100000, 00:23:23.762 "enable": false 00:23:23.762 } 00:23:23.762 }, 00:23:23.762 { 00:23:23.762 "method": "bdev_enable_histogram", 00:23:23.762 "params": { 00:23:23.762 "name": "nvme0n1", 00:23:23.762 "enable": true 00:23:23.762 } 00:23:23.762 }, 00:23:23.762 { 00:23:23.762 "method": "bdev_wait_for_examine" 00:23:23.762 } 00:23:23.762 ] 00:23:23.762 }, 00:23:23.762 { 00:23:23.762 "subsystem": "nbd", 00:23:23.762 "config": [] 00:23:23.762 } 00:23:23.762 ] 00:23:23.762 }' 00:23:23.762 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.762 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.762 [2024-12-14 16:36:53.704820] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:23.762 [2024-12-14 16:36:53.704869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026495 ] 00:23:23.762 [2024-12-14 16:36:53.781311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.762 [2024-12-14 16:36:53.803237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.021 [2024-12-14 16:36:53.950661] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.588 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.588 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.588 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:23:24.588 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:24.846 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.846 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:24.846 Running I/O for 1 seconds... 00:23:25.790 5412.00 IOPS, 21.14 MiB/s 00:23:25.790 Latency(us) 00:23:25.790 [2024-12-14T15:36:55.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.790 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:25.790 Verification LBA range: start 0x0 length 0x2000 00:23:25.790 nvme0n1 : 1.02 5453.47 21.30 0.00 0.00 23302.07 6834.47 20846.69 00:23:25.790 [2024-12-14T15:36:55.876Z] =================================================================================================================== 00:23:25.790 [2024-12-14T15:36:55.876Z] Total : 5453.47 21.30 0.00 0.00 23302.07 6834.47 20846.69 00:23:25.790 { 00:23:25.790 "results": [ 00:23:25.790 { 00:23:25.790 "job": "nvme0n1", 00:23:25.790 "core_mask": "0x2", 00:23:25.790 "workload": "verify", 00:23:25.790 "status": "finished", 00:23:25.790 "verify_range": { 00:23:25.790 "start": 0, 00:23:25.790 "length": 8192 00:23:25.790 }, 00:23:25.790 "queue_depth": 128, 00:23:25.790 "io_size": 4096, 00:23:25.790 "runtime": 1.01605, 00:23:25.790 "iops": 5453.471777963683, 00:23:25.790 "mibps": 21.302624132670637, 00:23:25.790 "io_failed": 0, 00:23:25.790 "io_timeout": 0, 00:23:25.790 "avg_latency_us": 23302.071873565885, 00:23:25.790 "min_latency_us": 6834.4685714285715, 00:23:25.790 "max_latency_us": 20846.689523809524 00:23:25.790 } 00:23:25.790 ], 00:23:25.790 "core_count": 1 00:23:25.790 } 00:23:25.790 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:26.050 
16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:26.050 nvmf_trace.0 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1026495 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1026495 ']' 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1026495 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.050 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1026495 00:23:26.050 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:26.050 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:26.050 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1026495' 00:23:26.050 killing process with pid 1026495 00:23:26.050 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1026495 00:23:26.050 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.050 00:23:26.050 Latency(us) 00:23:26.050 [2024-12-14T15:36:56.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.050 [2024-12-14T15:36:56.136Z] =================================================================================================================== 00:23:26.050 [2024-12-14T15:36:56.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.050 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1026495 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.309 rmmod nvme_tcp 00:23:26.309 rmmod nvme_fabrics 00:23:26.309 rmmod nvme_keyring 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1026260 ']' 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1026260 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1026260 ']' 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1026260 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1026260 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1026260' 00:23:26.309 killing process with pid 1026260 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1026260 00:23:26.309 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1026260 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:26.568 16:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.568 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.472 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:28.472 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Eb0yXY8bMn /tmp/tmp.eHbvH78FW4 /tmp/tmp.faDpddw6hK 00:23:28.472 00:23:28.472 real 1m18.593s 00:23:28.472 user 2m0.228s 00:23:28.472 sys 0m30.646s 00:23:28.472 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.472 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.472 ************************************ 00:23:28.472 END TEST nvmf_tls 00:23:28.472 ************************************ 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:28.732 
16:36:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:28.732 ************************************ 00:23:28.732 START TEST nvmf_fips 00:23:28.732 ************************************ 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:28.732 * Looking for test storage... 00:23:28.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:28.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.732 --rc genhtml_branch_coverage=1 00:23:28.732 --rc genhtml_function_coverage=1 00:23:28.732 --rc genhtml_legend=1 00:23:28.732 --rc geninfo_all_blocks=1 00:23:28.732 --rc geninfo_unexecuted_blocks=1 00:23:28.732 00:23:28.732 ' 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:28.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.732 --rc genhtml_branch_coverage=1 00:23:28.732 --rc genhtml_function_coverage=1 00:23:28.732 --rc genhtml_legend=1 00:23:28.732 --rc geninfo_all_blocks=1 00:23:28.732 --rc geninfo_unexecuted_blocks=1 00:23:28.732 00:23:28.732 ' 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:28.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.732 --rc genhtml_branch_coverage=1 00:23:28.732 --rc genhtml_function_coverage=1 00:23:28.732 --rc genhtml_legend=1 00:23:28.732 --rc geninfo_all_blocks=1 00:23:28.732 --rc geninfo_unexecuted_blocks=1 00:23:28.732 00:23:28.732 ' 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:28.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.732 --rc genhtml_branch_coverage=1 00:23:28.732 --rc genhtml_function_coverage=1 00:23:28.732 --rc genhtml_legend=1 00:23:28.732 --rc geninfo_all_blocks=1 00:23:28.732 --rc geninfo_unexecuted_blocks=1 00:23:28.732 00:23:28.732 ' 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:28.732 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:28.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:28.733 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:28.993 Error setting digest 00:23:28.993 40C2B94B857F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:28.993 40C2B94B857F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:28.993 16:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:28.993 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:35.561 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:35.561 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
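The `cmp_versions` walk traced earlier in this block (used both for the lcov check and for the openssl `ge 3.1.1 3.0.0` FIPS prerequisite) splits each version string on `.`/`-` into arrays and compares fields numerically, padding the shorter one with zeros. A minimal standalone sketch of that comparison, not the script's exact code:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version ">=" compare traced above (cmp_versions).
# Fields are split on "." and "-" and compared numerically; missing
# trailing fields are treated as 0, so 2.0 == 2.0.0.
version_ge() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 0   # strictly greater at this field
        (( a < b )) && return 1   # strictly smaller at this field
    done
    return 0                      # all fields equal
}

version_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
```

This matches the trace above, where the first field comparison (3 vs 3) ties and the second (1 vs 0) decides in favor of the installed OpenSSL.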
00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:35.561 Found net devices under 0000:af:00.0: cvl_0_0 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
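The device-discovery loop above resolves each matching PCI function's kernel net interfaces by globbing `/sys/bus/pci/devices/<addr>/net/*` and stripping the path prefix. The same lookup can be reproduced without real hardware against a throwaway directory tree; the `0000:af:00.0` address and `cvl_0_0` name below are taken from the log, the tree layout mirrors sysfs:

```shell
#!/usr/bin/env bash
# Re-creation of the sysfs net-device lookup traced above, run against a
# temporary fake sysfs tree so it works on any machine.
root=$(mktemp -d)
mkdir -p "$root/0000:af:00.0/net/cvl_0_0"

for pci in "$root"/*; do
    pci_net_devs=("$pci"/net/*)                 # glob the interface dirs
    pci_net_devs=("${pci_net_devs[@]##*/}")     # keep interface names only
    msg="Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    echo "$msg"
done

rm -rf "$root"
```

As in the trace, the `##*/` parameter expansion is what turns the full sysfs path into the bare interface name that later lands in `net_devs`.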
00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:35.561 Found net devices under 0000:af:00.1: cvl_0_1 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.561 16:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.561 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:23:35.562 00:23:35.562 --- 10.0.0.2 ping statistics --- 00:23:35.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.562 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:23:35.562 00:23:35.562 --- 10.0.0.1 ping statistics --- 00:23:35.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.562 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.562 16:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1030437 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1030437 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1030437 ']' 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.562 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:35.562 [2024-12-14 16:37:05.016927] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:35.562 [2024-12-14 16:37:05.016977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.562 [2024-12-14 16:37:05.096974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.562 [2024-12-14 16:37:05.117863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.562 [2024-12-14 16:37:05.117900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.562 [2024-12-14 16:37:05.117907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.562 [2024-12-14 16:37:05.117913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.562 [2024-12-14 16:37:05.117918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:35.562 [2024-12-14 16:37:05.118404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.EQt 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.EQt 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.EQt 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.EQt 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:35.562 [2024-12-14 16:37:05.437497] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.562 [2024-12-14 16:37:05.453510] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.562 [2024-12-14 16:37:05.453696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.562 malloc0 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1030473 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1030473 /var/tmp/bdevperf.sock 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1030473 ']' 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.562 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:35.562 [2024-12-14 16:37:05.584732] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:35.562 [2024-12-14 16:37:05.584786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030473 ] 00:23:35.820 [2024-12-14 16:37:05.655438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.820 [2024-12-14 16:37:05.677461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.820 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.820 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:35.820 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.EQt 00:23:36.079 16:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.079 [2024-12-14 16:37:06.160383] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.337 TLSTESTn1 00:23:36.337 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.337 Running I/O for 10 seconds... 
00:23:38.648 5389.00 IOPS, 21.05 MiB/s [2024-12-14T15:37:09.669Z] 5457.50 IOPS, 21.32 MiB/s [2024-12-14T15:37:10.603Z] 5525.67 IOPS, 21.58 MiB/s [2024-12-14T15:37:11.538Z] 5531.50 IOPS, 21.61 MiB/s [2024-12-14T15:37:12.473Z] 5542.60 IOPS, 21.65 MiB/s [2024-12-14T15:37:13.408Z] 5558.67 IOPS, 21.71 MiB/s [2024-12-14T15:37:14.783Z] 5573.43 IOPS, 21.77 MiB/s [2024-12-14T15:37:15.717Z] 5579.50 IOPS, 21.79 MiB/s [2024-12-14T15:37:16.652Z] 5566.33 IOPS, 21.74 MiB/s [2024-12-14T15:37:16.652Z] 5564.30 IOPS, 21.74 MiB/s 00:23:46.566 Latency(us) 00:23:46.566 [2024-12-14T15:37:16.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.566 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:46.566 Verification LBA range: start 0x0 length 0x2000 00:23:46.566 TLSTESTn1 : 10.02 5567.08 21.75 0.00 0.00 22954.61 6241.52 30833.13 00:23:46.566 [2024-12-14T15:37:16.652Z] =================================================================================================================== 00:23:46.566 [2024-12-14T15:37:16.652Z] Total : 5567.08 21.75 0.00 0.00 22954.61 6241.52 30833.13 00:23:46.566 { 00:23:46.566 "results": [ 00:23:46.566 { 00:23:46.566 "job": "TLSTESTn1", 00:23:46.566 "core_mask": "0x4", 00:23:46.566 "workload": "verify", 00:23:46.566 "status": "finished", 00:23:46.566 "verify_range": { 00:23:46.566 "start": 0, 00:23:46.566 "length": 8192 00:23:46.566 }, 00:23:46.566 "queue_depth": 128, 00:23:46.566 "io_size": 4096, 00:23:46.566 "runtime": 10.018002, 00:23:46.567 "iops": 5567.078145921711, 00:23:46.567 "mibps": 21.746399007506685, 00:23:46.567 "io_failed": 0, 00:23:46.567 "io_timeout": 0, 00:23:46.567 "avg_latency_us": 22954.61279958606, 00:23:46.567 "min_latency_us": 6241.523809523809, 00:23:46.567 "max_latency_us": 30833.12761904762 00:23:46.567 } 00:23:46.567 ], 00:23:46.567 "core_count": 1 00:23:46.567 } 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:46.567 
16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:46.567 nvmf_trace.0 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1030473 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1030473 ']' 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1030473 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1030473 00:23:46.567 16:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1030473' 00:23:46.567 killing process with pid 1030473 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1030473 00:23:46.567 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.567 00:23:46.567 Latency(us) 00:23:46.567 [2024-12-14T15:37:16.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.567 [2024-12-14T15:37:16.653Z] =================================================================================================================== 00:23:46.567 [2024-12-14T15:37:16.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.567 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1030473 00:23:46.825 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.826 rmmod nvme_tcp 00:23:46.826 rmmod nvme_fabrics 00:23:46.826 rmmod nvme_keyring 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1030437 ']' 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1030437 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1030437 ']' 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1030437 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1030437 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1030437' 00:23:46.826 killing process with pid 1030437 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1030437 00:23:46.826 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1030437 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.085 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.988 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:48.988 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.EQt 00:23:48.988 00:23:48.988 real 0m20.461s 00:23:48.988 user 0m21.183s 00:23:48.988 sys 0m9.658s 00:23:48.988 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.988 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:48.988 ************************************ 00:23:48.988 END TEST nvmf_fips 00:23:48.988 ************************************ 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:49.247 ************************************ 00:23:49.247 START TEST nvmf_control_msg_list 00:23:49.247 ************************************ 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:49.247 * Looking for test storage... 00:23:49.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.247 16:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:49.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.247 --rc genhtml_branch_coverage=1 00:23:49.247 --rc genhtml_function_coverage=1 00:23:49.247 --rc genhtml_legend=1 00:23:49.247 --rc geninfo_all_blocks=1 00:23:49.247 --rc geninfo_unexecuted_blocks=1 00:23:49.247 00:23:49.247 ' 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:49.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.247 --rc genhtml_branch_coverage=1 00:23:49.247 --rc genhtml_function_coverage=1 00:23:49.247 --rc genhtml_legend=1 00:23:49.247 --rc geninfo_all_blocks=1 00:23:49.247 --rc geninfo_unexecuted_blocks=1 00:23:49.247 00:23:49.247 ' 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:49.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.247 --rc genhtml_branch_coverage=1 00:23:49.247 --rc genhtml_function_coverage=1 00:23:49.247 --rc genhtml_legend=1 00:23:49.247 --rc geninfo_all_blocks=1 00:23:49.247 --rc geninfo_unexecuted_blocks=1 00:23:49.247 00:23:49.247 ' 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:23:49.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.247 --rc genhtml_branch_coverage=1 00:23:49.247 --rc genhtml_function_coverage=1 00:23:49.247 --rc genhtml_legend=1 00:23:49.247 --rc geninfo_all_blocks=1 00:23:49.247 --rc geninfo_unexecuted_blocks=1 00:23:49.247 00:23:49.247 ' 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:49.247 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.248 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.508 16:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.508 16:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:49.508 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.940 16:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.940 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:54.941 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:54.941 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.941 16:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:54.941 Found net devices under 0000:af:00.0: cvl_0_0 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.941 16:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:54.941 Found net devices under 0000:af:00.1: cvl_0_1 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.941 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.211 16:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:23:55.211 00:23:55.211 --- 10.0.0.2 ping statistics --- 00:23:55.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.211 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:23:55.211 00:23:55.211 --- 10.0.0.1 ping statistics --- 00:23:55.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.211 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1035732 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1035732 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1035732 ']' 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.211 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.212 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.212 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.212 [2024-12-14 16:37:25.257031] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:55.212 [2024-12-14 16:37:25.257085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.476 [2024-12-14 16:37:25.334474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.476 [2024-12-14 16:37:25.355497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.476 [2024-12-14 16:37:25.355533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.476 [2024-12-14 16:37:25.355540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.476 [2024-12-14 16:37:25.355546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.476 [2024-12-14 16:37:25.355551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.476 [2024-12-14 16:37:25.356025] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.476 [2024-12-14 16:37:25.486082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.476 Malloc0 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:55.476 [2024-12-14 16:37:25.538175] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1035753 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1035754 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1035755 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1035753 00:23:55.476 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:55.735 [2024-12-14 16:37:25.628842] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:55.735 [2024-12-14 16:37:25.629046] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:55.735 [2024-12-14 16:37:25.629231] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:56.670 Initializing NVMe Controllers 00:23:56.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:56.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:56.670 Initialization complete. Launching workers. 00:23:56.670 ======================================================== 00:23:56.670 Latency(us) 00:23:56.670 Device Information : IOPS MiB/s Average min max 00:23:56.670 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 7015.00 27.40 142.22 128.83 338.80 00:23:56.670 ======================================================== 00:23:56.670 Total : 7015.00 27.40 142.22 128.83 338.80 00:23:56.670 00:23:56.929 Initializing NVMe Controllers 00:23:56.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:56.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:56.929 Initialization complete. Launching workers. 
00:23:56.929 ======================================================== 00:23:56.929 Latency(us) 00:23:56.929 Device Information : IOPS MiB/s Average min max 00:23:56.929 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41176.70 40728.08 41917.72 00:23:56.929 ======================================================== 00:23:56.929 Total : 25.00 0.10 41176.70 40728.08 41917.72 00:23:56.929 00:23:56.929 Initializing NVMe Controllers 00:23:56.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:56.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:56.929 Initialization complete. Launching workers. 00:23:56.929 ======================================================== 00:23:56.929 Latency(us) 00:23:56.929 Device Information : IOPS MiB/s Average min max 00:23:56.929 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41089.43 40623.52 41915.05 00:23:56.929 ======================================================== 00:23:56.929 Total : 25.00 0.10 41089.43 40623.52 41915.05 00:23:56.929 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1035754 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1035755 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.929 16:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.929 rmmod nvme_tcp 00:23:56.929 rmmod nvme_fabrics 00:23:56.929 rmmod nvme_keyring 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1035732 ']' 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1035732 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1035732 ']' 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1035732 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.929 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1035732 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1035732' 00:23:57.188 killing process with pid 1035732 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1035732 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1035732 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.188 16:37:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:59.724 00:23:59.724 real 0m10.141s 00:23:59.724 user 0m6.829s 
00:23:59.724 sys 0m5.464s 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:59.724 ************************************ 00:23:59.724 END TEST nvmf_control_msg_list 00:23:59.724 ************************************ 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:59.724 ************************************ 00:23:59.724 START TEST nvmf_wait_for_buf 00:23:59.724 ************************************ 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:59.724 * Looking for test storage... 
00:23:59.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:59.724 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:23:59.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.725 --rc genhtml_branch_coverage=1 00:23:59.725 --rc genhtml_function_coverage=1 00:23:59.725 --rc genhtml_legend=1 00:23:59.725 --rc geninfo_all_blocks=1 00:23:59.725 --rc geninfo_unexecuted_blocks=1 00:23:59.725 00:23:59.725 ' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:59.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.725 --rc genhtml_branch_coverage=1 00:23:59.725 --rc genhtml_function_coverage=1 00:23:59.725 --rc genhtml_legend=1 00:23:59.725 --rc geninfo_all_blocks=1 00:23:59.725 --rc geninfo_unexecuted_blocks=1 00:23:59.725 00:23:59.725 ' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:59.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.725 --rc genhtml_branch_coverage=1 00:23:59.725 --rc genhtml_function_coverage=1 00:23:59.725 --rc genhtml_legend=1 00:23:59.725 --rc geninfo_all_blocks=1 00:23:59.725 --rc geninfo_unexecuted_blocks=1 00:23:59.725 00:23:59.725 ' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:59.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.725 --rc genhtml_branch_coverage=1 00:23:59.725 --rc genhtml_function_coverage=1 00:23:59.725 --rc genhtml_legend=1 00:23:59.725 --rc geninfo_all_blocks=1 00:23:59.725 --rc geninfo_unexecuted_blocks=1 00:23:59.725 00:23:59.725 ' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.725 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:59.726 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:06.295 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:06.295 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:06.295 Found net devices under 0000:af:00.0: cvl_0_0 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.295 16:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:06.295 Found net devices under 0000:af:00.1: cvl_0_1 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:06.295 16:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.295 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.296 16:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:06.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:24:06.296 00:24:06.296 --- 10.0.0.2 ping statistics --- 00:24:06.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.296 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:24:06.296 00:24:06.296 --- 10.0.0.1 ping statistics --- 00:24:06.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.296 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1039445 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1039445 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1039445 ']' 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 [2024-12-14 16:37:35.479960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:06.296 [2024-12-14 16:37:35.480004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.296 [2024-12-14 16:37:35.554836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.296 [2024-12-14 16:37:35.575792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.296 [2024-12-14 16:37:35.575828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:06.296 [2024-12-14 16:37:35.575835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.296 [2024-12-14 16:37:35.575844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.296 [2024-12-14 16:37:35.575850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.296 [2024-12-14 16:37:35.576378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 
16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 Malloc0 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:06.296 [2024-12-14 16:37:35.756438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 [2024-12-14 16:37:35.784618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:06.296 16:37:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.296 [2024-12-14 16:37:35.866458] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:07.674 Initializing NVMe Controllers 00:24:07.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:07.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:07.674 Initialization complete. Launching workers. 00:24:07.674 ======================================================== 00:24:07.674 Latency(us) 00:24:07.674 Device Information : IOPS MiB/s Average min max 00:24:07.674 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 30.00 3.75 138025.92 7273.61 191538.80 00:24:07.674 ======================================================== 00:24:07.674 Total : 30.00 3.75 138025.92 7273.61 191538.80 00:24:07.674 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.674 16:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=454 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 454 -eq 0 ]] 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:07.674 rmmod nvme_tcp 00:24:07.674 rmmod nvme_fabrics 00:24:07.674 rmmod nvme_keyring 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1039445 ']' 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1039445 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1039445 ']' 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1039445 
00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1039445 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1039445' 00:24:07.674 killing process with pid 1039445 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1039445 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1039445 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.674 16:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.674 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:10.210 00:24:10.210 real 0m10.432s 00:24:10.210 user 0m4.064s 00:24:10.210 sys 0m4.815s 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:10.210 ************************************ 00:24:10.210 END TEST nvmf_wait_for_buf 00:24:10.210 ************************************ 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:10.210 ************************************ 00:24:10.210 START TEST nvmf_fuzz 00:24:10.210 ************************************ 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:10.210 * Looking for test storage... 00:24:10.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.210 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:10.210 16:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.210 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.211 --rc genhtml_branch_coverage=1 00:24:10.211 --rc genhtml_function_coverage=1 
00:24:10.211 --rc genhtml_legend=1 00:24:10.211 --rc geninfo_all_blocks=1 00:24:10.211 --rc geninfo_unexecuted_blocks=1 00:24:10.211 00:24:10.211 ' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.211 --rc genhtml_branch_coverage=1 00:24:10.211 --rc genhtml_function_coverage=1 00:24:10.211 --rc genhtml_legend=1 00:24:10.211 --rc geninfo_all_blocks=1 00:24:10.211 --rc geninfo_unexecuted_blocks=1 00:24:10.211 00:24:10.211 ' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.211 --rc genhtml_branch_coverage=1 00:24:10.211 --rc genhtml_function_coverage=1 00:24:10.211 --rc genhtml_legend=1 00:24:10.211 --rc geninfo_all_blocks=1 00:24:10.211 --rc geninfo_unexecuted_blocks=1 00:24:10.211 00:24:10.211 ' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.211 --rc genhtml_branch_coverage=1 00:24:10.211 --rc genhtml_function_coverage=1 00:24:10.211 --rc genhtml_legend=1 00:24:10.211 --rc geninfo_all_blocks=1 00:24:10.211 --rc geninfo_unexecuted_blocks=1 00:24:10.211 00:24:10.211 ' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.211 
16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.211 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.781 16:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:24:16.781 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.781 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:16.782 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:16.782 Found net devices under 0000:af:00.0: cvl_0_0 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:16.782 Found net devices under 0000:af:00.1: cvl_0_1 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.782 16:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:24:16.782 00:24:16.782 --- 10.0.0.2 ping statistics --- 00:24:16.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.782 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:16.782 00:24:16.782 --- 10.0.0.1 ping statistics --- 00:24:16.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.782 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1043358 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1043358 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 1043358 ']' 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.782 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.782 Malloc0 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:16.782 16:37:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:48.859 Fuzzing completed. 
Shutting down the fuzz application 00:24:48.859 00:24:48.859 Dumping successful admin opcodes: 00:24:48.859 9, 10, 00:24:48.859 Dumping successful io opcodes: 00:24:48.859 0, 9, 00:24:48.859 NS: 0x2000008eff00 I/O qp, Total commands completed: 905703, total successful commands: 5275, random_seed: 2346681856 00:24:48.859 NS: 0x2000008eff00 admin qp, Total commands completed: 88720, total successful commands: 20, random_seed: 22773696 00:24:48.859 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:48.859 Fuzzing completed. Shutting down the fuzz application 00:24:48.859 00:24:48.859 Dumping successful admin opcodes: 00:24:48.859 00:24:48.859 Dumping successful io opcodes: 00:24:48.860 00:24:48.860 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3183779006 00:24:48.860 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 3183840636 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:48.860 16:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:48.860 rmmod nvme_tcp 00:24:48.860 rmmod nvme_fabrics 00:24:48.860 rmmod nvme_keyring 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1043358 ']' 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1043358 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1043358 ']' 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1043358 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043358 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043358' 00:24:48.860 killing process with pid 1043358 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1043358 00:24:48.860 16:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1043358 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.860 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:50.238 00:24:50.238 real 0m40.330s 00:24:50.238 user 0m51.911s 00:24:50.238 sys 0m17.368s 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.238 ************************************ 00:24:50.238 END TEST nvmf_fuzz 00:24:50.238 ************************************ 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:50.238 ************************************ 00:24:50.238 START TEST nvmf_multiconnection 00:24:50.238 ************************************ 00:24:50.238 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:50.497 * Looking for test storage... 
00:24:50.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:50.498 16:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:50.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.498 --rc genhtml_branch_coverage=1 00:24:50.498 --rc genhtml_function_coverage=1 00:24:50.498 --rc genhtml_legend=1 00:24:50.498 --rc geninfo_all_blocks=1 00:24:50.498 --rc geninfo_unexecuted_blocks=1 00:24:50.498 00:24:50.498 ' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:50.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.498 --rc genhtml_branch_coverage=1 00:24:50.498 --rc genhtml_function_coverage=1 00:24:50.498 --rc genhtml_legend=1 00:24:50.498 --rc geninfo_all_blocks=1 00:24:50.498 --rc geninfo_unexecuted_blocks=1 00:24:50.498 00:24:50.498 ' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:50.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.498 --rc genhtml_branch_coverage=1 00:24:50.498 --rc genhtml_function_coverage=1 00:24:50.498 --rc genhtml_legend=1 00:24:50.498 --rc geninfo_all_blocks=1 00:24:50.498 --rc geninfo_unexecuted_blocks=1 00:24:50.498 00:24:50.498 ' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:50.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.498 --rc genhtml_branch_coverage=1 00:24:50.498 --rc genhtml_function_coverage=1 00:24:50.498 --rc genhtml_legend=1 00:24:50.498 --rc geninfo_all_blocks=1 00:24:50.498 --rc geninfo_unexecuted_blocks=1 00:24:50.498 00:24:50.498 ' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.498 16:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.498 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:50.499 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.067 16:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.067 16:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:57.067 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:57.067 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.067 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:57.067 Found net devices under 0000:af:00.0: cvl_0_0 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:57.068 Found net devices under 0000:af:00.1: cvl_0_1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.068 16:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:24:57.068 00:24:57.068 --- 10.0.0.2 ping statistics --- 00:24:57.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.068 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:57.068 00:24:57.068 --- 10.0.0.1 ping statistics --- 00:24:57.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.068 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1051822 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1051822 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1051822 ']' 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.068 [2024-12-14 16:38:26.410536] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:57.068 [2024-12-14 16:38:26.410603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.068 [2024-12-14 16:38:26.487900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.068 [2024-12-14 16:38:26.512355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.068 [2024-12-14 16:38:26.512397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.068 [2024-12-14 16:38:26.512404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.068 [2024-12-14 16:38:26.512410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.068 [2024-12-14 16:38:26.512415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:57.068 [2024-12-14 16:38:26.513738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.068 [2024-12-14 16:38:26.513849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.068 [2024-12-14 16:38:26.513933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.068 [2024-12-14 16:38:26.513934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.068 [2024-12-14 16:38:26.646799] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:57.068 16:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.068 Malloc1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.068 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 [2024-12-14 16:38:26.712290] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 Malloc2 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 Malloc3 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 Malloc4 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 
16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 Malloc5 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 Malloc6 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.069 Malloc7 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.069 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 Malloc8 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 Malloc9 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 Malloc10 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.070 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.329 Malloc11 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:57.329 
16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.329 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:24:58.264 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:58.264 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:58.264 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.264 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:58.264 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:00.273 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:00.273 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:00.273 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:00.273 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:00.273 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.273 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:00.273 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.273 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:01.649 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:01.649 16:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:01.649 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.649 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:01.649 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:03.552 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:03.552 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:03.552 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:03.552 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:03.552 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.552 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:03.552 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.552 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:04.929 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:04.929 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:04.929 16:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.929 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:04.929 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:06.839 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:06.839 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:06.839 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:06.839 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:06.839 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.839 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:06.839 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.839 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:08.215 16:38:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:08.215 16:38:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:08.215 16:38:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:08.215 
16:38:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:08.215 16:38:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:10.117 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:10.117 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:10.117 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:10.117 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:10.117 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.117 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:10.117 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.117 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:11.495 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:11.495 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:11.495 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:11.495 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:11.495 16:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:13.404 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:13.404 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:13.404 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:13.404 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:13.404 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:13.404 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:13.404 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.404 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:14.782 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:14.782 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:14.782 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:14.782 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:14.782 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:16.686 16:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:16.686 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:16.686 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:16.686 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:16.686 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.686 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:16.686 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.686 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:18.064 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:18.064 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:18.064 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.064 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:18.064 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:19.971 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:19.971 16:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:19.971 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:19.971 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:19.971 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.971 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:19.971 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.971 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:21.348 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:21.349 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:21.349 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.349 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:21.349 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:23.254 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:23.254 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:23.254 16:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:23.254 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:23.254 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.254 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:23.254 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.254 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:24.633 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:24.633 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:24.633 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.633 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:24.633 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:27.173 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:27.173 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:27.173 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:27.173 16:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:27.173 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.173 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:27.173 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.173 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:28.110 16:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:28.110 16:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:28.110 16:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.110 16:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:28.110 16:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:30.645 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:30.645 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:30.645 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:30.645 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:30.645 16:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.645 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:30.645 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.645 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:32.024 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:32.024 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:32.024 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.024 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:32.024 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:33.930 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:33.930 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:33.930 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:33.930 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:33.930 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.930 
16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:33.930 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:33.930 [global] 00:25:33.930 thread=1 00:25:33.930 invalidate=1 00:25:33.930 rw=read 00:25:33.930 time_based=1 00:25:33.930 runtime=10 00:25:33.930 ioengine=libaio 00:25:33.930 direct=1 00:25:33.930 bs=262144 00:25:33.930 iodepth=64 00:25:33.930 norandommap=1 00:25:33.930 numjobs=1 00:25:33.930 00:25:33.930 [job0] 00:25:33.930 filename=/dev/nvme0n1 00:25:33.930 [job1] 00:25:33.930 filename=/dev/nvme10n1 00:25:33.930 [job2] 00:25:33.930 filename=/dev/nvme1n1 00:25:33.930 [job3] 00:25:33.930 filename=/dev/nvme2n1 00:25:33.930 [job4] 00:25:33.930 filename=/dev/nvme3n1 00:25:33.930 [job5] 00:25:33.930 filename=/dev/nvme4n1 00:25:33.930 [job6] 00:25:33.930 filename=/dev/nvme5n1 00:25:33.930 [job7] 00:25:33.930 filename=/dev/nvme6n1 00:25:33.930 [job8] 00:25:33.930 filename=/dev/nvme7n1 00:25:33.930 [job9] 00:25:33.930 filename=/dev/nvme8n1 00:25:33.930 [job10] 00:25:33.930 filename=/dev/nvme9n1 00:25:33.930 Could not set queue depth (nvme0n1) 00:25:33.930 Could not set queue depth (nvme10n1) 00:25:33.930 Could not set queue depth (nvme1n1) 00:25:33.930 Could not set queue depth (nvme2n1) 00:25:33.930 Could not set queue depth (nvme3n1) 00:25:33.931 Could not set queue depth (nvme4n1) 00:25:33.931 Could not set queue depth (nvme5n1) 00:25:33.931 Could not set queue depth (nvme6n1) 00:25:33.931 Could not set queue depth (nvme7n1) 00:25:33.931 Could not set queue depth (nvme8n1) 00:25:33.931 Could not set queue depth (nvme9n1) 00:25:34.190 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:34.190 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64
00:25:34.190 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:34.190 fio-3.35
00:25:34.190 Starting 11 threads
00:25:46.630
00:25:46.630 job0: (groupid=0, jobs=1): err= 0: pid=1058349: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=129, BW=32.4MiB/s (34.0MB/s)(328MiB/10126msec)
00:25:46.630 slat (usec): min=8, max=499752, avg=5626.18, stdev=29673.57
00:25:46.630 clat (msec): min=13, max=1176, avg=487.65, stdev=267.86
00:25:46.630 lat (msec): min=13, max=1176, avg=493.28, stdev=271.35
00:25:46.630 clat percentiles (msec):
00:25:46.630 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 184],
00:25:46.630 | 30.00th=[ 380], 40.00th=[ 514], 50.00th=[ 558], 60.00th=[ 592],
00:25:46.630 | 70.00th=[ 659], 80.00th=[ 718], 90.00th=[ 793], 95.00th=[ 852],
00:25:46.630 | 99.00th=[ 911], 99.50th=[ 944], 99.90th=[ 978], 99.95th=[ 1183],
00:25:46.630 | 99.99th=[ 1183]
00:25:46.630 bw ( KiB/s): min= 6144, max=90624, per=4.48%, avg=31948.80, stdev=18213.93, samples=20
00:25:46.630 iops : min= 24, max= 354, avg=124.80, stdev=71.15, samples=20
00:25:46.630 lat (msec) : 20=2.82%, 50=12.35%, 100=2.29%, 250=5.34%, 500=15.47%
00:25:46.630 lat (msec) : 750=46.11%, 1000=15.55%, 2000=0.08%
00:25:46.630 cpu : usr=0.09%, sys=0.54%, ctx=354, majf=0, minf=3722
00:25:46.630 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job1: (groupid=0, jobs=1): err= 0: pid=1058350: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=222, BW=55.5MiB/s (58.2MB/s)(562MiB/10118msec)
00:25:46.630 slat (usec): min=9, max=177574, avg=2235.78, stdev=13264.70
00:25:46.630 clat (usec): min=908, max=913360, avg=285558.12, stdev=281523.47
00:25:46.630 lat (usec): min=966, max=991487, avg=287793.90, stdev=284077.50
00:25:46.630 clat percentiles (usec):
00:25:46.630 | 1.00th=[ 1106], 5.00th=[ 1696], 10.00th=[ 2245], 20.00th=[ 5735],
00:25:46.630 | 30.00th=[ 21627], 40.00th=[ 37487], 50.00th=[189793], 60.00th=[387974],
00:25:46.630 | 70.00th=[522191], 80.00th=[583009], 90.00th=[692061], 95.00th=[750781],
00:25:46.630 | 99.00th=[843056], 99.50th=[884999], 99.90th=[910164], 99.95th=[910164],
00:25:46.630 | 99.99th=[910164]
00:25:46.630 bw ( KiB/s): min=19456, max=196608, per=7.84%, avg=55910.40, stdev=53003.77, samples=20
00:25:46.630 iops : min= 76, max= 768, avg=218.40, stdev=207.05, samples=20
00:25:46.630 lat (usec) : 1000=0.27%
00:25:46.630 lat (msec) : 2=7.47%, 4=7.47%, 10=7.83%, 20=4.98%, 50=13.52%
00:25:46.630 lat (msec) : 100=2.85%, 250=7.96%, 500=16.28%, 750=26.42%, 1000=4.94%
00:25:46.630 cpu : usr=0.11%, sys=0.88%, ctx=1050, majf=0, minf=4097
00:25:46.630 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job2: (groupid=0, jobs=1): err= 0: pid=1058351: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=230, BW=57.6MiB/s (60.4MB/s)(577MiB/10016msec)
00:25:46.630 slat (usec): min=15, max=154887, avg=4266.75, stdev=15769.58
00:25:46.630 clat (msec): min=2, max=569, avg=273.21, stdev=118.16
00:25:46.630 lat (msec): min=3, max=569, avg=277.48, stdev=119.69
00:25:46.630 clat percentiles (msec):
00:25:46.630 | 1.00th=[ 6], 5.00th=[ 41], 10.00th=[ 121], 20.00th=[ 174],
00:25:46.630 | 30.00th=[ 213], 40.00th=[ 255], 50.00th=[ 275], 60.00th=[ 305],
00:25:46.630 | 70.00th=[ 330], 80.00th=[ 372], 90.00th=[ 426], 95.00th=[ 460],
00:25:46.630 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567],
00:25:46.630 | 99.99th=[ 567]
00:25:46.630 bw ( KiB/s): min=28160, max=114176, per=8.06%, avg=57477.15, stdev=19728.27, samples=20
00:25:46.630 iops : min= 110, max= 446, avg=224.50, stdev=77.07, samples=20
00:25:46.630 lat (msec) : 4=0.61%, 10=0.69%, 20=1.43%, 50=2.86%, 100=1.56%
00:25:46.630 lat (msec) : 250=32.24%, 500=57.97%, 750=2.64%
00:25:46.630 cpu : usr=0.13%, sys=0.99%, ctx=350, majf=0, minf=4097
00:25:46.630 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job3: (groupid=0, jobs=1): err= 0: pid=1058355: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=383, BW=96.0MiB/s (101MB/s)(971MiB/10118msec)
00:25:46.630 slat (usec): min=9, max=670696, avg=2572.31, stdev=17630.56
00:25:46.630 clat (msec): min=19, max=1226, avg=163.96, stdev=216.96
00:25:46.630 lat (msec): min=19, max=1226, avg=166.53, stdev=220.22
00:25:46.630 clat percentiles (msec):
00:25:46.630 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 26],
00:25:46.630 | 30.00th=[ 28], 40.00th=[ 62], 50.00th=[ 80], 60.00th=[ 88],
00:25:46.630 | 70.00th=[ 109], 80.00th=[ 236], 90.00th=[ 584], 95.00th=[ 684],
00:25:46.630 | 99.00th=[ 827], 99.50th=[ 911], 99.90th=[ 936], 99.95th=[ 936],
00:25:46.630 | 99.99th=[ 1234]
00:25:46.630 bw ( KiB/s): min=15872, max=637440, per=13.71%, avg=97792.00, stdev=141546.72, samples=20
00:25:46.630 iops : min= 62, max= 2490, avg=382.00, stdev=552.92, samples=20
00:25:46.630 lat (msec) : 20=0.05%, 50=38.80%, 100=28.60%, 250=14.44%, 500=4.27%
00:25:46.630 lat (msec) : 750=11.30%, 1000=2.50%, 2000=0.03%
00:25:46.630 cpu : usr=0.14%, sys=1.26%, ctx=645, majf=0, minf=4097
00:25:46.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=3884,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job4: (groupid=0, jobs=1): err= 0: pid=1058358: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=237, BW=59.4MiB/s (62.3MB/s)(598MiB/10062msec)
00:25:46.630 slat (usec): min=14, max=202963, avg=3534.45, stdev=13849.40
00:25:46.630 clat (msec): min=4, max=709, avg=265.56, stdev=118.83
00:25:46.630 lat (msec): min=4, max=709, avg=269.09, stdev=120.10
00:25:46.630 clat percentiles (msec):
00:25:46.630 | 1.00th=[ 36], 5.00th=[ 62], 10.00th=[ 121], 20.00th=[ 171],
00:25:46.630 | 30.00th=[ 207], 40.00th=[ 232], 50.00th=[ 253], 60.00th=[ 284],
00:25:46.630 | 70.00th=[ 317], 80.00th=[ 363], 90.00th=[ 422], 95.00th=[ 451],
00:25:46.630 | 99.00th=[ 651], 99.50th=[ 693], 99.90th=[ 709], 99.95th=[ 709],
00:25:46.630 | 99.99th=[ 709]
00:25:46.630 bw ( KiB/s): min=34816, max=118272, per=8.35%, avg=59550.20, stdev=20753.97, samples=20
00:25:46.630 iops : min= 136, max= 462, avg=232.60, stdev=81.08, samples=20
00:25:46.630 lat (msec) : 10=0.17%, 20=0.13%, 50=2.85%, 100=5.36%, 250=39.75%
00:25:46.630 lat (msec) : 500=48.83%, 750=2.93%
00:25:46.630 cpu : usr=0.11%, sys=0.99%, ctx=336, majf=0, minf=4097
00:25:46.630 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=2390,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job5: (groupid=0, jobs=1): err= 0: pid=1058360: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=243, BW=60.8MiB/s (63.8MB/s)(615MiB/10110msec)
00:25:46.630 slat (usec): min=10, max=149535, avg=3680.11, stdev=15396.56
00:25:46.630 clat (usec): min=1156, max=718442, avg=259040.31, stdev=146550.34
00:25:46.630 lat (usec): min=1294, max=849614, avg=262720.42, stdev=148439.46
00:25:46.630 clat percentiles (msec):
00:25:46.630 | 1.00th=[ 10], 5.00th=[ 59], 10.00th=[ 74], 20.00th=[ 127],
00:25:46.630 | 30.00th=[ 174], 40.00th=[ 207], 50.00th=[ 243], 60.00th=[ 288],
00:25:46.630 | 70.00th=[ 326], 80.00th=[ 384], 90.00th=[ 468], 95.00th=[ 535],
00:25:46.630 | 99.00th=[ 642], 99.50th=[ 693], 99.90th=[ 718], 99.95th=[ 718],
00:25:46.630 | 99.99th=[ 718]
00:25:46.630 bw ( KiB/s): min=18944, max=159744, per=8.60%, avg=61363.20, stdev=36648.64, samples=20
00:25:46.630 iops : min= 74, max= 624, avg=239.70, stdev=143.16, samples=20
00:25:46.630 lat (msec) : 2=0.16%, 4=0.28%, 10=0.85%, 20=1.18%, 50=1.99%
00:25:46.630 lat (msec) : 100=11.42%, 250=36.14%, 500=41.75%, 750=6.22%
00:25:46.630 cpu : usr=0.07%, sys=1.03%, ctx=429, majf=0, minf=4097
00:25:46.630 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=2460,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job6: (groupid=0, jobs=1): err= 0: pid=1058361: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=117, BW=29.3MiB/s (30.7MB/s)(297MiB/10125msec)
00:25:46.630 slat (usec): min=17, max=215525, avg=7437.63, stdev=24995.60
00:25:46.630 clat (msec): min=18, max=1040, avg=538.20, stdev=194.30
00:25:46.630 lat (msec): min=18, max=1040, avg=545.64, stdev=197.59
00:25:46.630 clat percentiles (msec):
00:25:46.630 | 1.00th=[ 29], 5.00th=[ 159], 10.00th=[ 211], 20.00th=[ 426],
00:25:46.630 | 30.00th=[ 498], 40.00th=[ 542], 50.00th=[ 558], 60.00th=[ 592],
00:25:46.630 | 70.00th=[ 651], 80.00th=[ 701], 90.00th=[ 751], 95.00th=[ 802],
00:25:46.630 | 99.00th=[ 902], 99.50th=[ 927], 99.90th=[ 944], 99.95th=[ 1045],
00:25:46.630 | 99.99th=[ 1045]
00:25:46.630 bw ( KiB/s): min=17920, max=60416, per=4.03%, avg=28726.90, stdev=9805.49, samples=20
00:25:46.630 iops : min= 70, max= 236, avg=112.20, stdev=38.29, samples=20
00:25:46.630 lat (msec) : 20=0.51%, 50=2.28%, 100=1.43%, 250=7.76%, 500=18.80%
00:25:46.630 lat (msec) : 750=58.35%, 1000=10.79%, 2000=0.08%
00:25:46.630 cpu : usr=0.02%, sys=0.67%, ctx=255, majf=0, minf=4097
00:25:46.630 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=1186,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job7: (groupid=0, jobs=1): err= 0: pid=1058362: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=386, BW=96.7MiB/s (101MB/s)(979MiB/10119msec)
00:25:46.630 slat (usec): min=14, max=540949, avg=2388.23, stdev=17557.60
00:25:46.630 clat (usec): min=1167, max=1180.6k, avg=162839.13, stdev=214824.32
00:25:46.630 lat (usec): min=1199, max=1180.7k, avg=165227.37, stdev=217930.31
00:25:46.630 clat percentiles (msec):
00:25:46.630 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 44],
00:25:46.630 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 68],
00:25:46.630 | 70.00th=[ 110], 80.00th=[ 197], 90.00th=[ 575], 95.00th=[ 684],
00:25:46.630 | 99.00th=[ 785], 99.50th=[ 810], 99.90th=[ 852], 99.95th=[ 1116],
00:25:46.630 | 99.99th=[ 1183]
00:25:46.630 bw ( KiB/s): min=16896, max=341504, per=14.55%, avg=103750.84, stdev=115706.65, samples=19
00:25:46.630 iops : min= 66, max= 1334, avg=405.26, stdev=451.99, samples=19
00:25:46.630 lat (msec) : 2=0.77%, 4=0.54%, 10=0.82%, 20=0.46%, 50=34.47%
00:25:46.630 lat (msec) : 100=30.48%, 250=14.44%, 500=3.07%, 750=12.90%, 1000=1.99%
00:25:46.630 lat (msec) : 2000=0.08%
00:25:46.630 cpu : usr=0.17%, sys=1.59%, ctx=611, majf=0, minf=4097
00:25:46.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=3914,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job8: (groupid=0, jobs=1): err= 0: pid=1058365: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=338, BW=84.5MiB/s (88.6MB/s)(850MiB/10055msec)
00:25:46.630 slat (usec): min=15, max=528952, avg=2268.97, stdev=17914.20
00:25:46.630 clat (usec): min=1067, max=1153.7k, avg=186812.46, stdev=215133.91
00:25:46.630 lat (usec): min=1106, max=1153.7k, avg=189081.43, stdev=218249.80
00:25:46.630 clat percentiles (msec):
00:25:46.630 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 35],
00:25:46.630 | 30.00th=[ 68], 40.00th=[ 75], 50.00th=[ 91], 60.00th=[ 113],
00:25:46.630 | 70.00th=[ 138], 80.00th=[ 447], 90.00th=[ 567], 95.00th=[ 625],
00:25:46.630 | 99.00th=[ 793], 99.50th=[ 835], 99.90th=[ 995], 99.95th=[ 1003],
00:25:46.630 | 99.99th=[ 1150]
00:25:46.630 bw ( KiB/s): min=15360, max=241664, per=11.98%, avg=85427.20, stdev=72729.29, samples=20
00:25:46.630 iops : min= 60, max= 944, avg=333.70, stdev=284.10, samples=20
00:25:46.630 lat (msec) : 2=0.26%, 4=4.06%, 10=6.71%, 20=4.09%, 50=7.59%
00:25:46.630 lat (msec) : 100=30.29%, 250=22.71%, 500=8.35%, 750=14.15%, 1000=1.74%
00:25:46.630 lat (msec) : 2000=0.06%
00:25:46.630 cpu : usr=0.21%, sys=1.59%, ctx=1213, majf=0, minf=4097
00:25:46.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1%
00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.630 issued rwts: total=3400,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.630 job9: (groupid=0, jobs=1): err= 0: pid=1058366: Sat Dec 14 16:39:14 2024
00:25:46.630 read: IOPS=329, BW=82.4MiB/s (86.4MB/s)(825MiB/10018msec)
00:25:46.630 slat (usec): min=14, max=144742, avg=2894.90, stdev=11710.21
00:25:46.630 clat (msec): min=2, max=506, avg=191.18, stdev=126.81
00:25:46.630 lat (msec): min=2, max=506, avg=194.07, stdev=128.66
00:25:46.630 clat percentiles (msec):
00:25:46.631 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 39], 20.00th=[ 46],
00:25:46.631 | 30.00th=[ 50], 40.00th=[ 182], 50.00th=[ 218], 60.00th=[ 234],
00:25:46.631 | 70.00th=[ 262], 80.00th=[ 309], 90.00th=[ 363], 95.00th=[ 401],
00:25:46.631 | 99.00th=[ 451], 99.50th=[ 468], 99.90th=[ 506], 99.95th=[ 506],
00:25:46.631 | 99.99th=[ 506]
00:25:46.631 bw ( KiB/s): min=38912, max=376832, per=11.62%, avg=82867.20, stdev=78007.85, samples=20
00:25:46.631 iops : min= 152, max= 1472, avg=323.70, stdev=304.72, samples=20
00:25:46.631 lat (msec) : 4=0.09%, 10=3.48%, 20=3.21%, 50=24.09%, 100=3.33%
00:25:46.631 lat (msec) : 250=33.76%, 500=31.88%, 750=0.15%
00:25:46.631 cpu : usr=0.08%, sys=1.46%, ctx=481, majf=0, minf=4097
00:25:46.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:25:46.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.631 issued rwts: total=3300,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.631 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.631 job10: (groupid=0, jobs=1): err= 0: pid=1058367: Sat Dec 14 16:39:14 2024
00:25:46.631 read: IOPS=179, BW=44.8MiB/s (47.0MB/s)(453MiB/10116msec)
00:25:46.631 slat (usec): min=10, max=362730, avg=3528.84, stdev=19866.72
00:25:46.631 clat (usec): min=741, max=1007.1k, avg=353293.70, stdev=282478.91
00:25:46.631 lat (usec): min=766, max=1007.2k, avg=356822.54, stdev=285768.94
00:25:46.631 clat percentiles (usec):
00:25:46.631 | 1.00th=[ 848], 5.00th=[ 3621], 10.00th=[ 13960],
00:25:46.631 | 20.00th=[ 30540], 30.00th=[ 69731], 40.00th=[ 179307],
00:25:46.631 | 50.00th=[400557], 60.00th=[488637], 70.00th=[566232],
00:25:46.631 | 80.00th=[633340], 90.00th=[725615], 95.00th=[784335],
00:25:46.631 | 99.00th=[851444], 99.50th=[884999], 99.90th=[985662],
00:25:46.631 | 99.95th=[1010828], 99.99th=[1010828]
00:25:46.631 bw ( KiB/s): min=13312, max=119808, per=6.28%, avg=44774.40, stdev=31336.51, samples=20
00:25:46.631 iops : min= 52, max= 468, avg=174.90, stdev=122.41, samples=20
00:25:46.631 lat (usec) : 750=0.06%, 1000=2.21%
00:25:46.631 lat (msec) : 2=0.83%, 4=2.48%, 10=2.43%, 20=6.57%, 50=11.37%
00:25:46.631 lat (msec) : 100=9.05%, 250=8.83%, 500=17.00%, 750=31.62%, 1000=7.51%
00:25:46.631 lat (msec) : 2000=0.06%
00:25:46.631 cpu : usr=0.07%, sys=0.68%, ctx=660, majf=0, minf=4097
00:25:46.631 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5%
00:25:46.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:46.631 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:46.631 issued rwts: total=1812,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:46.631 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:46.631
00:25:46.631 Run status group 0 (all jobs):
00:25:46.631 READ: bw=697MiB/s (730MB/s), 29.3MiB/s-96.7MiB/s (30.7MB/s-101MB/s), io=7054MiB (7396MB), run=10016-10126msec
00:25:46.631
00:25:46.631 Disk stats (read/write):
00:25:46.631 nvme0n1: ios=2487/0, merge=0/0, ticks=1189019/0, in_queue=1189019, util=97.24%
00:25:46.631 nvme10n1: ios=4303/0, merge=0/0, ticks=1211464/0, in_queue=1211464, util=97.36%
00:25:46.631 nvme1n1: ios=4429/0, merge=0/0, ticks=1237116/0, in_queue=1237116, util=97.63%
00:25:46.631 nvme2n1: ios=7647/0, merge=0/0, ticks=1193032/0, in_queue=1193032, util=97.80%
00:25:46.631 nvme3n1: ios=4628/0, merge=0/0, ticks=1243739/0, in_queue=1243739, util=97.90%
00:25:46.631 nvme4n1: ios=4681/0, merge=0/0, ticks=1234527/0, in_queue=1234527, util=98.20%
00:25:46.631 nvme5n1: ios=2239/0, merge=0/0, ticks=1205664/0, in_queue=1205664, util=98.39%
00:25:46.631 nvme6n1: ios=7681/0, merge=0/0, ticks=1186567/0, in_queue=1186567, util=98.50%
00:25:46.631 nvme7n1: ios=6602/0, merge=0/0, ticks=1236110/0, in_queue=1236110, util=98.87%
00:25:46.631 nvme8n1: ios=6341/0, merge=0/0, ticks=1240138/0, in_queue=1240138, util=99.03%
00:25:46.631 nvme9n1: ios=3462/0, merge=0/0, ticks=1190401/0, in_queue=1190401, util=99.14%
00:25:46.631 16:39:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:25:46.631 [global]
00:25:46.631 thread=1
00:25:46.631 invalidate=1
00:25:46.631 rw=randwrite
00:25:46.631 time_based=1
00:25:46.631 runtime=10
00:25:46.631 ioengine=libaio
00:25:46.631 direct=1
00:25:46.631 bs=262144
00:25:46.631 iodepth=64
00:25:46.631 norandommap=1
00:25:46.631 numjobs=1
00:25:46.631
00:25:46.631 [job0]
00:25:46.631 filename=/dev/nvme0n1
00:25:46.631 [job1]
00:25:46.631 filename=/dev/nvme10n1
00:25:46.631 [job2]
00:25:46.631 filename=/dev/nvme1n1
00:25:46.631 [job3]
00:25:46.631 filename=/dev/nvme2n1
00:25:46.631 [job4]
00:25:46.631 filename=/dev/nvme3n1
00:25:46.631 [job5]
00:25:46.631 filename=/dev/nvme4n1
00:25:46.631 [job6]
00:25:46.631 filename=/dev/nvme5n1
00:25:46.631 [job7]
00:25:46.631 filename=/dev/nvme6n1
00:25:46.631 [job8]
00:25:46.631 filename=/dev/nvme7n1
00:25:46.631 [job9]
00:25:46.631 filename=/dev/nvme8n1
00:25:46.631 [job10]
00:25:46.631 filename=/dev/nvme9n1
00:25:46.631 Could not set queue depth (nvme0n1)
00:25:46.631 Could not set queue depth (nvme10n1)
00:25:46.631 Could not set queue depth (nvme1n1)
00:25:46.631 Could not set queue depth (nvme2n1)
00:25:46.631 Could not set queue depth (nvme3n1)
00:25:46.631 Could not set queue depth (nvme4n1)
00:25:46.631 Could not set queue depth (nvme5n1)
00:25:46.631 Could not set queue depth (nvme6n1)
00:25:46.631 Could not set queue depth (nvme7n1)
00:25:46.631 Could not set queue depth (nvme8n1)
00:25:46.631 Could not set queue depth (nvme9n1)
00:25:46.631 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:46.631 fio-3.35
00:25:46.631 Starting 11 threads
00:25:56.611
00:25:56.611 job0: (groupid=0, jobs=1): err= 0: pid=1059780: Sat Dec 14 16:39:26 2024
00:25:56.611 write: IOPS=332, BW=83.0MiB/s (87.0MB/s)(839MiB/10101msec); 0 zone resets
00:25:56.611 slat (usec): min=22, max=97788, avg=2017.99, stdev=6068.76
00:25:56.611 clat (usec): min=1652, max=656962, avg=190660.57, stdev=131386.37
00:25:56.611 lat (usec): min=1703, max=657015, avg=192678.56, stdev=133049.07
00:25:56.611 clat percentiles (msec):
00:25:56.611 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 40], 20.00th=[ 80],
00:25:56.611 | 30.00th=[ 120], 40.00th=[ 144], 50.00th=[ 157], 60.00th=[ 188],
00:25:56.612 | 70.00th=[ 228], 80.00th=[ 300], 90.00th=[ 372], 95.00th=[ 460],
00:25:56.612 | 99.00th=[ 575], 99.50th=[ 625], 99.90th=[ 651], 99.95th=[ 659],
00:25:56.612 | 99.99th=[ 659]
00:25:56.612 bw ( KiB/s): min=26624, max=133120, per=7.46%, avg=84249.60, stdev=32892.18, samples=20
00:25:56.612 iops : min= 104, max= 520, avg=329.10, stdev=128.49, samples=20
00:25:56.612 lat (msec) : 2=0.03%, 4=0.92%, 10=1.64%, 20=2.18%, 50=8.29%
00:25:56.612 lat (msec) : 100=12.13%, 250=48.60%, 500=23.11%, 750=3.10%
00:25:56.612 cpu : usr=0.86%, sys=1.05%, ctx=2006, majf=0, minf=1
00:25:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:25:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.612 issued rwts: total=0,3354,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.612 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.612 job1: (groupid=0, jobs=1): err= 0: pid=1059794: Sat Dec 14 16:39:26 2024
00:25:56.612 write: IOPS=399, BW=100.0MiB/s (105MB/s)(1015MiB/10148msec); 0 zone resets
00:25:56.612 slat (usec): min=29, max=190930, avg=1629.10, stdev=5880.95
00:25:56.612 clat (msec): min=2, max=746, avg=158.35, stdev=123.83
00:25:56.612 lat (msec): min=2, max=754, avg=159.98, stdev=125.22
00:25:56.612 clat percentiles (msec):
00:25:56.612 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 43], 20.00th=[ 75],
00:25:56.612 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 123], 60.00th=[ 140],
00:25:56.612 | 70.00th=[ 180], 80.00th=[ 249], 90.00th=[ 321], 95.00th=[ 388],
00:25:56.612 | 99.00th=[ 600], 99.50th=[ 676], 99.90th=[ 735], 99.95th=[ 743],
00:25:56.612 | 99.99th=[ 743]
00:25:56.612 bw ( KiB/s): min=32256, max=198144, per=9.06%, avg=102246.40, stdev=53027.59, samples=20
00:25:56.612 iops : min= 126, max= 774, avg=399.40, stdev=207.14, samples=20
00:25:56.612 lat (msec) : 4=0.07%, 10=0.69%, 20=0.81%, 50=10.55%, 100=31.62%
00:25:56.612 lat (msec) : 250=36.50%, 500=16.90%, 750=2.86%
00:25:56.612 cpu : usr=0.88%, sys=1.37%, ctx=2527, majf=0, minf=1
00:25:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.612 issued rwts: total=0,4058,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.612 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.612 job2: (groupid=0, jobs=1): err= 0: pid=1059796: Sat Dec 14 16:39:26 2024
00:25:56.612 write: IOPS=709, BW=177MiB/s (186MB/s)(1801MiB/10147msec); 0 zone resets
00:25:56.612 slat (usec): min=26, max=60153, avg=1291.01, stdev=3010.79
00:25:56.612 clat (usec): min=1306, max=367029, avg=88816.28, stdev=55063.04
00:25:56.612 lat (usec): min=1364, max=367079, avg=90107.29, stdev=55796.55
00:25:56.612 clat percentiles (msec):
00:25:56.612 | 1.00th=[ 11], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 43],
00:25:56.612 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 85], 60.00th=[ 96],
00:25:56.612 | 70.00th=[ 107], 80.00th=[ 116], 90.00th=[ 169], 95.00th=[ 199],
00:25:56.612 | 99.00th=[ 275], 99.50th=[ 292], 99.90th=[ 347], 99.95th=[ 355],
00:25:56.612 | 99.99th=[ 368]
00:25:56.612 bw ( KiB/s): min=61440, max=394240, per=16.19%, avg=182784.00, stdev=94492.86, samples=20
00:25:56.612 iops : min= 240, max= 1540, avg=714.00, stdev=369.11, samples=20
00:25:56.612 lat (msec) : 2=0.06%, 4=0.31%, 10=0.60%, 20=1.08%, 50=34.24%
00:25:56.612 lat (msec) : 100=26.18%, 250=35.75%, 500=1.79%
00:25:56.612 cpu : usr=1.55%, sys=2.15%, ctx=2389, majf=0, minf=1
00:25:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:25:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.612 issued rwts: total=0,7203,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.612 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.612 job3: (groupid=0, jobs=1): err= 0: pid=1059797: Sat Dec 14 16:39:26 2024
00:25:56.612 write: IOPS=225, BW=56.4MiB/s (59.1MB/s)(571MiB/10120msec); 0 zone resets
00:25:56.612 slat (usec): min=23, max=113056, avg=3266.60, stdev=8471.36
00:25:56.612 clat (msec): min=7, max=593, avg=280.32, stdev=135.08
00:25:56.612 lat (msec): min=7, max=593, avg=283.58, stdev=136.89
00:25:56.612 clat percentiles (msec):
00:25:56.612 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 83], 20.00th=[ 171],
00:25:56.612 | 30.00th=[ 194], 40.00th=[ 251], 50.00th=[ 296], 60.00th=[ 330],
00:25:56.612 | 70.00th=[ 351], 80.00th=[ 380], 90.00th=[ 464], 95.00th=[ 510],
00:25:56.612 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 592], 99.95th=[ 592],
00:25:56.612 | 99.99th=[ 592]
00:25:56.612 bw ( KiB/s): min=28672, max=111104, per=5.03%, avg=56832.00, stdev=24097.52, samples=20
00:25:56.612 iops : min= 112, max= 434, avg=222.00, stdev=94.13, samples=20
00:25:56.612 lat (msec) : 10=0.13%, 20=1.53%, 50=5.65%, 100=4.20%, 250=28.60%
00:25:56.612 lat (msec) : 500=54.23%, 750=5.65%
00:25:56.612 cpu : usr=0.54%, sys=0.80%, ctx=1175, majf=0, minf=1
00:25:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:25:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.612 issued rwts: total=0,2283,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.612 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.612 job4: (groupid=0, jobs=1): err= 0: pid=1059798: Sat Dec 14 16:39:26 2024
00:25:56.612 write: IOPS=246, BW=61.6MiB/s (64.6MB/s)(624MiB/10122msec); 0 zone resets
00:25:56.612 slat (usec): min=20, max=102297, avg=3360.07, stdev=8555.79
00:25:56.612 clat (usec): min=1466, max=613288, avg=256281.35, stdev=146914.02
00:25:56.612 lat (usec): min=1521, max=613352, avg=259641.42, stdev=149122.98
00:25:56.612 clat percentiles (msec):
00:25:56.612 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 72], 20.00th=[ 128],
00:25:56.612 | 30.00th=[ 161], 40.00th=[ 194], 50.00th=[ 234], 60.00th=[ 292],
00:25:56.612 | 70.00th=[ 338], 80.00th=[ 380], 90.00th=[ 460], 95.00th=[ 542],
00:25:56.612 | 99.00th=[ 584], 99.50th=[ 592], 99.90th=[ 617], 99.95th=[ 617],
00:25:56.612 | 99.99th=[ 617]
00:25:56.612 bw ( KiB/s): min=26624, max=116736, per=5.51%, avg=62241.80, stdev=28132.23, samples=20
00:25:56.612 iops : min= 104, max= 456, avg=243.10, stdev=109.87, samples=20
00:25:56.612 lat (msec) : 2=0.08%, 4=0.16%, 10=1.00%, 20=1.76%, 50=4.41%
00:25:56.612 lat (msec) : 100=6.94%, 250=38.65%, 500=39.90%, 750=7.10%
00:25:56.612 cpu : usr=0.63%, sys=0.80%, ctx=1130, majf=0, minf=1
00:25:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5%
00:25:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.612 issued rwts: total=0,2494,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.612 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.612 job5: (groupid=0, jobs=1): err= 0: pid=1059799: Sat Dec 14 16:39:26 2024
00:25:56.612 write: IOPS=483, BW=121MiB/s (127MB/s)(1224MiB/10123msec); 0 zone resets
00:25:56.612 slat (usec): min=24, max=102923, avg=1845.25, stdev=5797.16
00:25:56.612 clat (usec): min=744, max=609168, avg=130404.75, stdev=138594.58
00:25:56.612 lat (usec): min=779, max=620720, avg=132250.00, stdev=140609.71
00:25:56.612 clat percentiles (msec):
00:25:56.612 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41],
00:25:56.612 | 30.00th=[ 44], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 75],
00:25:56.612 | 70.00th=[ 136], 80.00th=[ 232], 90.00th=[ 347], 95.00th=[ 468],
00:25:56.612 | 99.00th=[ 567], 99.50th=[ 584], 99.90th=[ 609], 99.95th=[ 609],
00:25:56.612 | 99.99th=[ 609]
00:25:56.612 bw ( KiB/s): min=26624, max=399872, per=10.96%, avg=123673.60, stdev=114625.72, samples=20
00:25:56.612 iops : min= 104, max= 1562, avg=483.10, stdev=447.76, samples=20
00:25:56.612 lat (usec) : 750=0.02%, 1000=0.12%
00:25:56.612 lat (msec) : 2=0.39%, 4=0.39%, 10=0.84%, 20=1.21%, 50=35.55%
00:25:56.612 lat (msec) : 100=27.95%, 250=16.00%, 500=14.24%, 750=3.31%
00:25:56.612 cpu : usr=1.17%, sys=1.56%, ctx=1659, majf=0, minf=1
00:25:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:25:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.612 issued rwts: total=0,4895,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.612 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.612 job6: (groupid=0, jobs=1): err= 0: pid=1059800: Sat Dec 14 16:39:26 2024
00:25:56.612 write: IOPS=343, BW=85.8MiB/s (89.9MB/s)(866MiB/10097msec); 0 zone resets
00:25:56.612 slat (usec): min=28, max=162650, avg=2546.35, stdev=7512.38
00:25:56.612 clat (msec): min=8, max=704, avg=183.38, stdev=120.32
00:25:56.612 lat (msec): min=8, max=704, avg=185.93, stdev=121.86
00:25:56.612 clat percentiles (msec):
00:25:56.612 | 1.00th=[ 51], 5.00th=[ 77], 10.00th=[ 88], 20.00th=[ 100],
00:25:56.612 | 30.00th=[ 126], 40.00th=[ 138], 50.00th=[ 146], 60.00th=[ 163],
00:25:56.612 | 70.00th=[ 188], 80.00th=[ 226], 90.00th=[ 296], 95.00th=[ 510],
00:25:56.612 | 99.00th=[ 617], 99.50th=[ 634], 99.90th=[ 667], 99.95th=[ 701],
00:25:56.612 | 99.99th=[ 701]
00:25:56.612 bw ( KiB/s): min=26624, max=180736, per=7.71%, avg=87065.60, stdev=43523.48, samples=20
00:25:56.612 iops : min= 104, max= 706, avg=340.10, stdev=170.01, samples=20
00:25:56.612 lat (msec) : 10=0.03%, 50=0.98%, 100=19.57%, 250=62.93%, 500=11.06%
00:25:56.612 lat (msec) : 750=5.43%
00:25:56.612 cpu : usr=0.73%, sys=1.16%, ctx=1149, majf=0, minf=1
00:25:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2%
00:25:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.612 issued rwts: total=0,3464,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.612 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.612 job7: (groupid=0, jobs=1): err= 0: pid=1059801: Sat Dec 14 16:39:26 2024
00:25:56.612 write: IOPS=303, BW=76.0MiB/s (79.7MB/s)(771MiB/10142msec); 0 zone resets
00:25:56.612 slat (usec): min=13, max=114187, avg=3076.70, stdev=8435.45
00:25:56.612 clat (msec): min=4, max=649, avg=207.45, stdev=155.07
00:25:56.612 lat (msec): min=4, max=649, avg=210.53, stdev=157.28
00:25:56.612 clat percentiles (msec):
00:25:56.612 | 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 59], 20.00th=[ 73],
00:25:56.612 | 30.00th=[ 91], 40.00th=[ 111], 50.00th=[ 169], 60.00th=[ 203],
00:25:56.612 | 70.00th=[ 262], 80.00th=[ 368], 90.00th=[ 447], 95.00th=[ 510],
00:25:56.612 | 99.00th=[ 617], 99.50th=[ 634], 99.90th=[ 651], 99.95th=[ 651],
00:25:56.612 | 99.99th=[ 651]
00:25:56.612 bw ( KiB/s): min=24576, max=253434, per=6.85%, avg=77311.70, stdev=56811.72, samples=20
00:25:56.612 iops : min= 96, max= 989, avg=301.95, stdev=221.76, samples=20
00:25:56.612 lat (msec) : 10=0.42%, 20=1.56%, 50=4.67%, 100=28.36%, 250=33.71%
00:25:56.613 lat (msec) : 500=25.28%, 750=6.00%
00:25:56.613 cpu : usr=0.70%, sys=1.04%, ctx=993, majf=0, minf=2
00:25:56.613 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0%
00:25:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.613 issued rwts: total=0,3082,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.613 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.613 job8: (groupid=0, jobs=1): err= 0: pid=1059808: Sat Dec 14 16:39:26 2024
00:25:56.613 write: IOPS=461, BW=115MiB/s (121MB/s)(1159MiB/10040msec); 0 zone resets
00:25:56.613 slat (usec): min=21, max=219280, avg=1927.65, stdev=6350.72
00:25:56.613 clat (msec): min=2, max=748, avg=136.70, stdev=138.08
00:25:56.613 lat (msec): min=2, max=748, avg=138.62, stdev=139.99
00:25:56.613 clat percentiles (msec):
00:25:56.613 | 1.00th=[ 10], 5.00th=[ 18], 10.00th=[ 34], 20.00th=[ 40],
00:25:56.613 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 68], 60.00th=[ 85],
00:25:56.613 | 70.00th=[ 169], 80.00th=[ 271], 90.00th=[ 342], 95.00th=[ 393],
00:25:56.613 | 99.00th=[ 567], 99.50th=[ 625], 99.90th=[ 726], 99.95th=[ 743],
00:25:56.613 | 99.99th=[ 751]
00:25:56.613 bw ( KiB/s): min=22528, max=357376, per=10.37%, avg=117017.60, stdev=99259.98, samples=20
00:25:56.613 iops : min= 88, max= 1396, avg=457.10, stdev=387.73, samples=20
00:25:56.613 lat (msec) : 4=0.04%, 10=1.08%, 20=4.55%, 50=26.56%, 100=32.46%
00:25:56.613 lat (msec) : 250=14.16%, 500=18.49%, 750=2.65%
00:25:56.613 cpu : usr=1.17%, sys=1.39%, ctx=1930, majf=0, minf=1
00:25:56.613 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6%
00:25:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.613 issued rwts: total=0,4634,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.613 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.613 job9: (groupid=0, jobs=1): err= 0: pid=1059809: Sat Dec 14 16:39:26 2024
00:25:56.613 write: IOPS=390, BW=97.6MiB/s (102MB/s)(989MiB/10138msec); 0 zone resets
00:25:56.613 slat (usec): min=26, max=114399, avg=1567.26, stdev=5108.06
00:25:56.613 clat (usec): min=1339, max=593072, avg=162289.56, stdev=112755.54
00:25:56.613 lat (usec): min=1395, max=593132, avg=163856.82, stdev=113707.06
00:25:56.613 clat percentiles (msec):
00:25:56.613 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 38], 20.00th=[ 95],
00:25:56.613 | 30.00th=[ 104], 40.00th=[ 111], 50.00th=[ 118], 60.00th=[ 140],
00:25:56.613 | 70.00th=[ 192], 80.00th=[ 275], 90.00th=[ 351], 95.00th=[ 384],
00:25:56.613 | 99.00th=[ 451], 99.50th=[ 510], 99.90th=[ 584], 99.95th=[ 592],
00:25:56.613 | 99.99th=[ 592]
00:25:56.613 bw ( KiB/s): min=38912, max=180736, per=8.83%, avg=99686.40, stdev=44726.87, samples=20
00:25:56.613 iops : min= 152, max= 706, avg=389.40, stdev=174.71, samples=20
00:25:56.613 lat (msec) : 2=0.08%, 4=1.14%, 10=3.44%, 20=2.02%, 50=6.70%
00:25:56.613 lat (msec) : 100=9.88%, 250=53.70%, 500=22.49%, 750=0.56%
00:25:56.613 cpu : usr=0.84%, sys=1.28%, ctx=2198, majf=0, minf=1
00:25:56.613 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:25:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.613 issued rwts: total=0,3957,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:56.613 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:56.613 job10: (groupid=0, jobs=1): err= 0: pid=1059810: Sat Dec 14 16:39:26 2024
00:25:56.613 write: IOPS=526, BW=132MiB/s (138MB/s)(1330MiB/10103msec); 0 zone resets
00:25:56.613 slat (usec): min=19, max=45985, avg=1429.18, stdev=3301.32
00:25:56.613 clat (usec): min=1121, max=292238, avg=120038.51, stdev=56860.10
00:25:56.613 lat (usec): min=1163, max=292315, avg=121467.69, stdev=57327.63
00:25:56.613 clat percentiles (msec):
00:25:56.613 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 57], 20.00th=[ 64],
00:25:56.613 | 30.00th=[ 88], 40.00th=[ 102], 50.00th=[ 122], 60.00th=[ 138],
00:25:56.613 | 70.00th=[ 146], 80.00th=[ 161], 90.00th=[ 197], 95.00th=[ 224],
00:25:56.613 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 288],
00:25:56.613 | 99.99th=[ 292]
00:25:56.613 bw ( KiB/s): min=72192, max=251392, per=11.93%, avg=134616.05, stdev=55204.64, samples=20
00:25:56.613 iops : min= 282, max= 982, avg=525.80, stdev=215.66, samples=20
00:25:56.613 lat (msec) : 2=0.24%, 4=0.60%, 10=1.33%, 20=1.17%, 50=5.45%
00:25:56.613 lat (msec) : 100=30.86%, 250=58.47%, 500=1.88%
00:25:56.613 cpu : usr=0.98%, sys=1.71%, ctx=2374, majf=0, minf=1
00:25:56.613 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:25:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:56.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:56.613 issued rwts: total=0,5321,0,0
short=0,0,0,0 dropped=0,0,0,0 00:25:56.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:56.613 00:25:56.613 Run status group 0 (all jobs): 00:25:56.613 WRITE: bw=1102MiB/s (1156MB/s), 56.4MiB/s-177MiB/s (59.1MB/s-186MB/s), io=10.9GiB (11.7GB), run=10040-10148msec 00:25:56.613 00:25:56.613 Disk stats (read/write): 00:25:56.613 nvme0n1: ios=49/6520, merge=0/0, ticks=54/1216441, in_queue=1216495, util=95.18% 00:25:56.613 nvme10n1: ios=29/7939, merge=0/0, ticks=401/1222752, in_queue=1223153, util=98.87% 00:25:56.613 nvme1n1: ios=41/14250, merge=0/0, ticks=1178/1204719, in_queue=1205897, util=100.00% 00:25:56.613 nvme2n1: ios=0/4403, merge=0/0, ticks=0/1215897, in_queue=1215897, util=96.23% 00:25:56.613 nvme3n1: ios=0/4821, merge=0/0, ticks=0/1210728, in_queue=1210728, util=96.42% 00:25:56.613 nvme4n1: ios=40/9620, merge=0/0, ticks=1931/1203187, in_queue=1205118, util=99.86% 00:25:56.613 nvme5n1: ios=43/6740, merge=0/0, ticks=3636/1181899, in_queue=1185535, util=100.00% 00:25:56.613 nvme6n1: ios=44/6007, merge=0/0, ticks=3319/1193902, in_queue=1197221, util=100.00% 00:25:56.613 nvme7n1: ios=0/8989, merge=0/0, ticks=0/1217521, in_queue=1217521, util=98.73% 00:25:56.613 nvme8n1: ios=34/7725, merge=0/0, ticks=1827/1222925, in_queue=1224752, util=99.87% 00:25:56.613 nvme9n1: ios=0/10453, merge=0/0, ticks=0/1212724, in_queue=1212724, util=99.08% 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:56.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:56.613 16:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:56.613 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- 
# local i=0 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.613 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:56.872 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.873 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.132 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.132 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.132 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:57.132 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:57.132 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:57.132 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:57.132 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:57.132 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:57.391 16:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:57.391 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:57.391 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 
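The `waitforserial_disconnect` helper traced repeatedly above polls `lsblk` until a namespace with the given SPDK serial disappears. A condensed sketch (the helper name, `lsblk -l -o NAME,SERIAL`, and `grep -q -w` come from the trace; the retry bound and sleep interval are assumptions, not values taken from the log):

```shell
# Sketch of waitforserial_disconnect: poll lsblk until no block device
# reports the given serial, i.e. the NVMe-oF namespace is really gone.
waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
        i=$((i + 1))
        [ "$i" -gt 15 ] && return 1   # give up after ~15 polls (assumed bound)
        sleep 1
    done
    return 0
}
```

This is why each `nvme disconnect` in the trace is immediately followed by two `lsblk`/`grep` invocations before the subsystem is deleted.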
00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:57.650 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:57.650 16:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.650 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:57.909 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.909 16:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:58.167 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:58.167 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:58.167 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:58.167 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 
-- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:58.168 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:25:58.168 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:58.426 
NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.426 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:58.685 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:58.685 
16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:58.685 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:58.686 
16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:58.686 rmmod nvme_tcp 00:25:58.686 rmmod nvme_fabrics 00:25:58.686 rmmod nvme_keyring 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1051822 ']' 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1051822 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1051822 ']' 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1051822 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1051822 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1051822' 00:25:58.686 killing process with pid 1051822 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1051822 00:25:58.686 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1051822 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.255 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.160 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:01.160 00:26:01.160 real 1m10.933s 00:26:01.160 user 4m17.039s 00:26:01.160 sys 0m17.591s 
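The teardown just traced repeats one pattern eleven times: disconnect the initiator from `cnode$i`, wait for the `SPDK$i` serial to vanish, then delete the subsystem over RPC. A sketch of that loop (NQNs and the loop bound come from the trace; the `rpc.py` invocation is an assumed stand-in for the test's `rpc_cmd` wrapper, and commands are echoed rather than executed):

```shell
# Per-subsystem teardown loop, as reconstructed from the trace above.
NVMF_SUBSYS=11
teardown_cmds=$(
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # 1) drop the initiator-side connection
        echo "nvme disconnect -n nqn.2016-06.io.spdk:cnode$i"
        # 2) (elided) poll lsblk until serial SPDK$i disappears
        # 3) remove the target-side subsystem over JSON-RPC
        echo "rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i"
    done
)
printf '%s\n' "$teardown_cmds"
```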
00:26:01.160 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.160 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:01.160 ************************************ 00:26:01.160 END TEST nvmf_multiconnection 00:26:01.160 ************************************ 00:26:01.160 16:39:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:01.160 16:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:01.160 16:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.160 16:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:01.420 ************************************ 00:26:01.420 START TEST nvmf_initiator_timeout 00:26:01.420 ************************************ 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:01.420 * Looking for test storage... 
00:26:01.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:01.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.420 --rc genhtml_branch_coverage=1 00:26:01.420 --rc genhtml_function_coverage=1 00:26:01.420 --rc genhtml_legend=1 00:26:01.420 --rc geninfo_all_blocks=1 00:26:01.420 --rc geninfo_unexecuted_blocks=1 00:26:01.420 00:26:01.420 ' 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:01.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.420 --rc genhtml_branch_coverage=1 00:26:01.420 --rc genhtml_function_coverage=1 00:26:01.420 --rc genhtml_legend=1 00:26:01.420 --rc geninfo_all_blocks=1 00:26:01.420 --rc geninfo_unexecuted_blocks=1 00:26:01.420 00:26:01.420 ' 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:01.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.420 --rc genhtml_branch_coverage=1 00:26:01.420 --rc genhtml_function_coverage=1 00:26:01.420 --rc genhtml_legend=1 00:26:01.420 --rc geninfo_all_blocks=1 00:26:01.420 --rc geninfo_unexecuted_blocks=1 00:26:01.420 00:26:01.420 ' 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:01.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.420 --rc genhtml_branch_coverage=1 00:26:01.420 --rc genhtml_function_coverage=1 00:26:01.420 --rc genhtml_legend=1 00:26:01.420 --rc geninfo_all_blocks=1 00:26:01.420 --rc geninfo_unexecuted_blocks=1 00:26:01.420 00:26:01.420 ' 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.420 
16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:01.420 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:01.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.421 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.992 16:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
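Annotation: the `[: : integer expression expected` error a few entries above (nvmf/common.sh line 33, `'[' '' -eq 1 ']'`) is the classic bash pitfall of arithmetic-testing an empty variable. A minimal reproduction with the usual fix (defaulting the expansion); the variable name here is illustrative, not the one the script uses.

```shell
# Reproduce the error class seen in the log, then the defaulted form.
NVMF_NICS=""                                            # empty, as in the failing run
[ "$NVMF_NICS" -eq 1 ] 2>/dev/null && echo "one NIC"    # errors: '' is not an integer
[ "${NVMF_NICS:-0}" -eq 1 ] || echo "zero or unset"     # defaulted: test is well-formed
```

The log treats this as non-fatal because the trace is not running under `set -e`; the failed test simply evaluates false and the script continues.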
00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
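Annotation: the `e810`/`x722`/`mlx` arrays traced above bucket NICs by PCI `vendor:device` ID from a bus cache, then pick one family as `pci_devs`. A reduced sketch of that classification with a fake two-entry cache (the real script populates `pci_bus_cache` from the PCI bus; the IDs and addresses below are illustrative):

```shell
# Fake bus cache keyed by "vendor:device"; values are space-separated PCI addresses.
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1"   # Intel E810, as in the log
    ["0x15b3:0x1017"]=""                            # no Mellanox CX-5 present
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=()
e810+=(${pci_bus_cache["$intel:0x159b"]})   # word-splitting is intentional here
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) # empty value contributes zero entries
pci_devs=("${e810[@]}")                     # e810 family selected, as in the trace
echo "e810 devices: ${#e810[@]}, mlx devices: ${#mlx[@]}"
```

The unquoted `+=` expansion is what lets one cache entry fan out into multiple array elements, which is why the trace later reports `(( 2 == 0 ))` as false and iterates both ports.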
00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:07.992 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:07.992 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:07.992 16:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:07.992 Found net devices under 0000:af:00.0: cvl_0_0 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.992 16:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:07.992 Found net devices under 0000:af:00.1: cvl_0_1 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.993 16:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:07.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:26:07.993 00:26:07.993 --- 10.0.0.2 ping statistics --- 00:26:07.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.993 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:26:07.993 00:26:07.993 --- 10.0.0.1 ping statistics --- 00:26:07.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.993 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1065049 
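Annotation: the `nvmf_tcp_init` sequence above builds a back-to-back topology from one physical NIC pair: one port (`cvl_0_0`) is moved into a private namespace as the target side (10.0.0.2) while the other (`cvl_0_1`) stays in the root namespace as the initiator (10.0.0.1), and the two pings verify both directions. A sketch of the same topology with a veth pair standing in for the physical ports, so it can run without the NIC; it needs root (and `CAP_NET_ADMIN`) and only prints the plan otherwise. All names here are illustrative.

```shell
# Back-to-back target/initiator topology, veth edition (root-guarded sketch).
if [ "$(id -u)" -ne 0 ]; then
    echo "not root: skipping namespace setup"
    netns_sketch=skipped
else
    ip netns add demo_ns_spdk
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns demo_ns_spdk                  # target side into the ns
    ip addr add 10.0.0.1/24 dev veth_init                    # initiator IP
    ip netns exec demo_ns_spdk ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec demo_ns_spdk ip link set veth_tgt up
    ip netns exec demo_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns del demo_ns_spdk
    netns_sketch=done
fi
```

Isolating the target in its own namespace is what lets a single host exercise real TCP traffic over the wire (or a veth cable) instead of loopback; the `iptables -I INPUT ... --dport 4420 -j ACCEPT` rule in the log then opens the NVMe/TCP port on the initiator side.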
00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 1065049 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1065049 ']' 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 [2024-12-14 16:39:37.410860] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:26:07.993 [2024-12-14 16:39:37.410911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.993 [2024-12-14 16:39:37.489806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:07.993 [2024-12-14 16:39:37.513743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:07.993 [2024-12-14 16:39:37.513782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.993 [2024-12-14 16:39:37.513789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.993 [2024-12-14 16:39:37.513795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.993 [2024-12-14 16:39:37.513800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.993 [2024-12-14 16:39:37.515266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.993 [2024-12-14 16:39:37.515376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.993 [2024-12-14 16:39:37.515488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.993 [2024-12-14 16:39:37.515490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:07.993 
16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 Malloc0 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 Delay0 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 [2024-12-14 16:39:37.697361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.993 [2024-12-14 16:39:37.730712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.993 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:08.930 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:08.930 
16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0
00:26:08.930 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:08.930 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:08.930 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1065612
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:26:10.835 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:26:10.835 [global]
00:26:10.835 thread=1
00:26:10.835 invalidate=1
00:26:10.835 rw=write
00:26:10.835 time_based=1
00:26:10.835 runtime=60
00:26:10.835 ioengine=libaio
00:26:10.835 direct=1
00:26:10.835 bs=4096
00:26:10.835 iodepth=1
00:26:10.835 norandommap=0
00:26:10.835 numjobs=1
00:26:10.835
00:26:10.835 verify_dump=1
00:26:10.835 verify_backlog=512
00:26:10.835 verify_state_save=0
00:26:10.835 do_verify=1
00:26:10.835 verify=crc32c-intel
00:26:10.835 [job0]
00:26:10.835 filename=/dev/nvme0n1
00:26:10.835 Could not set queue depth (nvme0n1)
00:26:11.094 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:26:11.094 fio-3.35
00:26:11.094 Starting 1 thread
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:14.379 true
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:14.379 true
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout --
common/autotest_common.sh@10 -- # set +x 00:26:14.379 true 00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.379 true 00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.379 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.909 true 00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.909 true 00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.909 16:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:26:16.909 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.910 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:16.910 true
00:26:16.910 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.910 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:26:16.910 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.910 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:16.910 true
00:26:16.910 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.910 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:26:16.910 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1065612
00:27:13.133
00:27:13.133 job0: (groupid=0, jobs=1): err= 0: pid=1065731: Sat Dec 14 16:40:41 2024
00:27:13.133 read: IOPS=308, BW=1234KiB/s (1263kB/s)(72.3MiB/60029msec)
00:27:13.133 slat (usec): min=6, max=10243, avg= 7.99, stdev=75.26
00:27:13.133 clat (usec): min=189, max=41561k, avg=3034.02, stdev=305460.23
00:27:13.133 lat (usec): min=197, max=41561k, avg=3042.01, stdev=305460.38
00:27:13.133 clat percentiles (usec):
00:27:13.133  | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231],
00:27:13.133  | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247],
00:27:13.133  | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 281],
00:27:13.133  | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:27:13.133  | 99.99th=[42206]
00:27:13.133 write: IOPS=315, BW=1262KiB/s (1293kB/s)(74.0MiB/60029msec); 0 zone resets
00:27:13.133 slat (usec): min=9, max=27268, avg=12.03, stdev=198.05
00:27:13.133 clat (usec): min=136, max=425, avg=178.91, stdev=16.25
00:27:13.133 lat (usec): min=151, max=27667, avg=190.94, stdev=200.31
00:27:13.133 clat percentiles (usec):
00:27:13.133  | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167],
00:27:13.133  | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182],
00:27:13.133  | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202],
00:27:13.133  | 99.00th=[ 241], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 306],
00:27:13.133  | 99.99th=[ 400]
00:27:13.133 bw ( KiB/s): min= 880, max=10456, per=100.00%, avg=7976.42, stdev=2520.03, samples=19
00:27:13.133 iops : min= 220, max= 2614, avg=1994.11, stdev=630.01, samples=19
00:27:13.133 lat (usec) : 250=82.94%, 500=16.40%
00:27:13.133 lat (msec) : 50=0.66%, >=2000=0.01%
00:27:13.133 cpu : usr=0.31%, sys=0.60%, ctx=37469, majf=0, minf=1
00:27:13.133 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:27:13.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:13.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:13.133 issued rwts: total=18516,18944,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:13.133 latency : target=0, window=0, percentile=100.00%, depth=1
00:27:13.133
00:27:13.133 Run status group 0 (all jobs):
00:27:13.133 READ: bw=1234KiB/s (1263kB/s), 1234KiB/s-1234KiB/s (1263kB/s-1263kB/s), io=72.3MiB (75.8MB), run=60029-60029msec
00:27:13.133 WRITE: bw=1262KiB/s (1293kB/s), 1262KiB/s-1262KiB/s (1293kB/s-1293kB/s), io=74.0MiB (77.6MB), run=60029-60029msec
00:27:13.133
00:27:13.133 Disk stats (read/write):
00:27:13.133 nvme0n1: ios=18614/18944, merge=0/0, ticks=15562/3272, in_queue=18834, util=100.00%
00:27:13.133
16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:13.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:13.134 nvmf hotplug test: fio successful as expected 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.134 rmmod nvme_tcp 00:27:13.134 rmmod nvme_fabrics 00:27:13.134 rmmod nvme_keyring 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1065049 ']' 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1065049 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1065049 ']' 00:27:13.134 16:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1065049 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1065049 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1065049' 00:27:13.134 killing process with pid 1065049 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1065049 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1065049 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore
00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:13.134 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:14.072 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:14.072
00:27:14.072 real 1m12.589s
00:27:14.072 user 4m22.398s
00:27:14.072 sys 0m7.115s
00:27:14.072 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:14.072 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:14.072 ************************************
00:27:14.072 END TEST nvmf_initiator_timeout
00:27:14.072 ************************************
00:27:14.072 16:40:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:27:14.072 16:40:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:27:14.072 16:40:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:27:14.072 16:40:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:27:14.072 16:40:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra --
nvmf/common.sh@315 -- # pci_devs=() 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:19.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.348 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:19.349 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.349 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:19.349 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:19.349 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.349 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.349 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.349 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.349 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.349 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:19.609 Found net devices under 0000:af:00.0: cvl_0_0 00:27:19.609 16:40:49 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:19.609 Found net devices under 0000:af:00.1: cvl_0_1 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:19.609 ************************************ 00:27:19.609 START 
TEST nvmf_perf_adq 00:27:19.609 ************************************ 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:19.609 * Looking for test storage... 00:27:19.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.609 16:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.609 --rc genhtml_branch_coverage=1 00:27:19.609 --rc genhtml_function_coverage=1 00:27:19.609 --rc genhtml_legend=1 00:27:19.609 --rc geninfo_all_blocks=1 00:27:19.609 --rc geninfo_unexecuted_blocks=1 00:27:19.609 00:27:19.609 ' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.609 --rc genhtml_branch_coverage=1 00:27:19.609 --rc genhtml_function_coverage=1 00:27:19.609 --rc genhtml_legend=1 00:27:19.609 --rc geninfo_all_blocks=1 00:27:19.609 --rc geninfo_unexecuted_blocks=1 00:27:19.609 00:27:19.609 ' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.609 --rc genhtml_branch_coverage=1 00:27:19.609 --rc genhtml_function_coverage=1 00:27:19.609 --rc genhtml_legend=1 00:27:19.609 --rc geninfo_all_blocks=1 00:27:19.609 --rc geninfo_unexecuted_blocks=1 00:27:19.609 00:27:19.609 ' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:19.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.609 --rc genhtml_branch_coverage=1 00:27:19.609 --rc genhtml_function_coverage=1 00:27:19.609 --rc genhtml_legend=1 00:27:19.609 --rc geninfo_all_blocks=1 00:27:19.609 --rc geninfo_unexecuted_blocks=1 00:27:19.609 00:27:19.609 ' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.609 
16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.609 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:19.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:19.610 16:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.610 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.182 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.183 16:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:26.183 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:26.183 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:26.183 Found net devices under 0000:af:00.0: cvl_0_0 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:26.183 Found net devices under 0000:af:00.1: cvl_0_1 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:26.183 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:26.443 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:28.976 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:34.255 16:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.255 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.256 16:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:34.256 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:27:34.256 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:34.256 Found net devices under 0000:af:00.0: cvl_0_0 
00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:34.256 Found net devices under 0000:af:00.1: cvl_0_1 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.256 16:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.256 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.256 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.256 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:27:34.256 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.256 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.256 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.256 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:34.256 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:34.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:27:34.257 00:27:34.257 --- 10.0.0.2 ping statistics --- 00:27:34.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.257 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:27:34.257 00:27:34.257 --- 10.0.0.1 ping statistics --- 00:27:34.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.257 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1083361 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1083361 00:27:34.257 
16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1083361 ']' 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.257 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.257 [2024-12-14 16:41:04.285574] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:27:34.257 [2024-12-14 16:41:04.285614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.517 [2024-12-14 16:41:04.363782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.517 [2024-12-14 16:41:04.386327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.517 [2024-12-14 16:41:04.386368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
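nvmfappstart launches the target inside the namespace and then blocks in waitforlisten until the RPC socket is up. A sketch of the equivalent, using the nvmf_tgt command line from the trace; the polling loop is a simplified, hypothetical stand-in for the real waitforlisten helper:

```shell
# Launch the SPDK NVMe-oF target inside the target namespace (command taken
# from the trace above) and wait until its default RPC socket appears.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll for /var/tmp/spdk.sock while
# making sure the target process is still alive.
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done
```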
00:27:34.517 [2024-12-14 16:41:04.386375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.517 [2024-12-14 16:41:04.386382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.517 [2024-12-14 16:41:04.386386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.517 [2024-12-14 16:41:04.387842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.517 [2024-12-14 16:41:04.387951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.517 [2024-12-14 16:41:04.388037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.517 [2024-12-14 16:41:04.388038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.517 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.776 [2024-12-14 16:41:04.616993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.776 
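The RPC sequence driven by adq_configure_nvmf_target (perf_adq.sh lines 42-49 in the trace, including the subsystem setup that follows) corresponds to the calls below. rpc_cmd in the test scripts is a thin wrapper around SPDK's scripts/rpc.py; arguments are copied from the log, and the rpc.py path is assumed from the workspace layout:

```shell
# adq_configure_nvmf_target, issued directly via rpc.py against the running
# nvmf_tgt (started with --wait-for-rpc, so framework init is explicit).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

impl=$("$rpc" sock_get_default_impl | jq -r .impl_name)   # "posix" in the log
"$rpc" sock_impl_set_options --enable-placement-id 0 \
       --enable-zerocopy-send-server -i "$impl"
"$rpc" framework_start_init                               # leave --wait-for-rpc state
"$rpc" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
"$rpc" bdev_malloc_create 64 512 -b Malloc1               # 64 MiB bdev, 512 B blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
```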
16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.776 Malloc1 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.776 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.777 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.777 [2024-12-14 16:41:04.683425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:34.777 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.777 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1083476 00:27:34.777 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:34.777 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:36.682 "tick_rate": 2100000000, 00:27:36.682 "poll_groups": [ 00:27:36.682 { 00:27:36.682 "name": "nvmf_tgt_poll_group_000", 00:27:36.682 "admin_qpairs": 1, 00:27:36.682 "io_qpairs": 1, 00:27:36.682 "current_admin_qpairs": 1, 00:27:36.682 "current_io_qpairs": 1, 00:27:36.682 "pending_bdev_io": 0, 00:27:36.682 "completed_nvme_io": 19453, 00:27:36.682 "transports": [ 00:27:36.682 { 00:27:36.682 "trtype": "TCP" 00:27:36.682 } 00:27:36.682 ] 00:27:36.682 }, 00:27:36.682 { 00:27:36.682 "name": "nvmf_tgt_poll_group_001", 00:27:36.682 "admin_qpairs": 0, 00:27:36.682 "io_qpairs": 1, 00:27:36.682 "current_admin_qpairs": 0, 00:27:36.682 "current_io_qpairs": 1, 00:27:36.682 "pending_bdev_io": 0, 00:27:36.682 "completed_nvme_io": 19437, 00:27:36.682 "transports": [ 
00:27:36.682 { 00:27:36.682 "trtype": "TCP" 00:27:36.682 } 00:27:36.682 ] 00:27:36.682 }, 00:27:36.682 { 00:27:36.682 "name": "nvmf_tgt_poll_group_002", 00:27:36.682 "admin_qpairs": 0, 00:27:36.682 "io_qpairs": 1, 00:27:36.682 "current_admin_qpairs": 0, 00:27:36.682 "current_io_qpairs": 1, 00:27:36.682 "pending_bdev_io": 0, 00:27:36.682 "completed_nvme_io": 19669, 00:27:36.682 "transports": [ 00:27:36.682 { 00:27:36.682 "trtype": "TCP" 00:27:36.682 } 00:27:36.682 ] 00:27:36.682 }, 00:27:36.682 { 00:27:36.682 "name": "nvmf_tgt_poll_group_003", 00:27:36.682 "admin_qpairs": 0, 00:27:36.682 "io_qpairs": 1, 00:27:36.682 "current_admin_qpairs": 0, 00:27:36.682 "current_io_qpairs": 1, 00:27:36.682 "pending_bdev_io": 0, 00:27:36.682 "completed_nvme_io": 19384, 00:27:36.682 "transports": [ 00:27:36.682 { 00:27:36.682 "trtype": "TCP" 00:27:36.682 } 00:27:36.682 ] 00:27:36.682 } 00:27:36.682 ] 00:27:36.682 }' 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:36.682 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1083476 00:27:44.805 Initializing NVMe Controllers 00:27:44.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:44.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:44.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:44.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:44.805 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:44.805 Initialization complete. Launching workers. 00:27:44.805 ======================================================== 00:27:44.805 Latency(us) 00:27:44.805 Device Information : IOPS MiB/s Average min max 00:27:44.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10249.97 40.04 6243.77 2366.88 11144.09 00:27:44.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10309.47 40.27 6209.03 2134.82 9941.42 00:27:44.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10445.27 40.80 6126.68 2398.28 10322.50 00:27:44.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10330.07 40.35 6197.09 2282.34 10505.48 00:27:44.805 ======================================================== 00:27:44.805 Total : 41334.79 161.46 6193.85 2134.82 11144.09 00:27:44.805 00:27:44.805 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:44.805 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:44.805 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:44.805 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:44.805 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:44.805 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:44.805 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:44.805 rmmod nvme_tcp 00:27:44.805 rmmod nvme_fabrics 00:27:45.065 rmmod nvme_keyring 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:45.065 16:41:14 
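As a sanity check on the spdk_nvme_perf table above, the Total row is just the sum of the four per-core rows; summing the IOPS column reproduces the reported total of 41334.79 up to rounding:

```shell
# Sum the per-core IOPS values copied from the perf table above (cores 4-7).
printf '%s\n' 10249.97 10309.47 10445.27 10330.07 |
awk '{ sum += $1 } END { printf "total IOPS: %.2f\n", sum }'
# → total IOPS: 41334.78
```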
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1083361 ']' 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1083361 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1083361 ']' 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1083361 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1083361 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1083361' 00:27:45.065 killing process with pid 1083361 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1083361 00:27:45.065 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1083361 00:27:45.065 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:45.065 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:45.065 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:45.065 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:45.065 
16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:45.065 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:45.065 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:45.324 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:45.324 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:45.324 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.324 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.324 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.231 16:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:47.231 16:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:47.231 16:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:47.231 16:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:48.610 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:51.146 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:56.427 16:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:56.427 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.427 16:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:56.427 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:27:56.427 Found net devices under 0000:af:00.0: cvl_0_0 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:56.427 Found net devices under 0000:af:00.1: cvl_0_1 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:56.427 16:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:56.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:27:56.428 00:27:56.428 --- 10.0.0.2 ping statistics --- 00:27:56.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.428 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:27:56.428 00:27:56.428 --- 10.0.0.1 ping statistics --- 00:27:56.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.428 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:56.428 net.core.busy_poll = 1 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:56.428 net.core.busy_read = 1 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:56.428 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:56.687 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1087279 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1087279 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1087279 ']' 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.688 [2024-12-14 16:41:26.598393] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:27:56.688 [2024-12-14 16:41:26.598438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.688 [2024-12-14 16:41:26.677152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.688 [2024-12-14 16:41:26.700058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.688 [2024-12-14 16:41:26.700093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.688 [2024-12-14 16:41:26.700101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.688 [2024-12-14 16:41:26.700107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:56.688 [2024-12-14 16:41:26.700112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.688 [2024-12-14 16:41:26.701566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.688 [2024-12-14 16:41:26.701665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.688 [2024-12-14 16:41:26.701699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.688 [2024-12-14 16:41:26.701701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:56.688 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 [2024-12-14 16:41:26.913801] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.947 16:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 Malloc1 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 [2024-12-14 16:41:26.978185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1087456 
00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:56.947 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:59.485 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:59.485 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.485 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.485 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.485 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:59.485 "tick_rate": 2100000000, 00:27:59.485 "poll_groups": [ 00:27:59.485 { 00:27:59.485 "name": "nvmf_tgt_poll_group_000", 00:27:59.485 "admin_qpairs": 1, 00:27:59.485 "io_qpairs": 1, 00:27:59.485 "current_admin_qpairs": 1, 00:27:59.485 "current_io_qpairs": 1, 00:27:59.485 "pending_bdev_io": 0, 00:27:59.485 "completed_nvme_io": 27894, 00:27:59.485 "transports": [ 00:27:59.485 { 00:27:59.485 "trtype": "TCP" 00:27:59.485 } 00:27:59.485 ] 00:27:59.485 }, 00:27:59.485 { 00:27:59.485 "name": "nvmf_tgt_poll_group_001", 00:27:59.485 "admin_qpairs": 0, 00:27:59.485 "io_qpairs": 3, 00:27:59.485 "current_admin_qpairs": 0, 00:27:59.485 "current_io_qpairs": 3, 00:27:59.485 "pending_bdev_io": 0, 00:27:59.485 "completed_nvme_io": 28929, 00:27:59.485 "transports": [ 00:27:59.485 { 00:27:59.485 "trtype": "TCP" 00:27:59.485 } 00:27:59.485 ] 00:27:59.485 }, 00:27:59.485 { 00:27:59.485 "name": "nvmf_tgt_poll_group_002", 00:27:59.485 "admin_qpairs": 0, 00:27:59.485 "io_qpairs": 0, 00:27:59.485 "current_admin_qpairs": 0, 
00:27:59.485 "current_io_qpairs": 0, 00:27:59.485 "pending_bdev_io": 0, 00:27:59.485 "completed_nvme_io": 0, 00:27:59.485 "transports": [ 00:27:59.485 { 00:27:59.485 "trtype": "TCP" 00:27:59.485 } 00:27:59.485 ] 00:27:59.485 }, 00:27:59.485 { 00:27:59.485 "name": "nvmf_tgt_poll_group_003", 00:27:59.485 "admin_qpairs": 0, 00:27:59.485 "io_qpairs": 0, 00:27:59.485 "current_admin_qpairs": 0, 00:27:59.485 "current_io_qpairs": 0, 00:27:59.485 "pending_bdev_io": 0, 00:27:59.485 "completed_nvme_io": 0, 00:27:59.485 "transports": [ 00:27:59.485 { 00:27:59.485 "trtype": "TCP" 00:27:59.485 } 00:27:59.485 ] 00:27:59.485 } 00:27:59.485 ] 00:27:59.485 }' 00:27:59.485 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:59.485 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:59.485 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:27:59.485 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:27:59.485 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1087456 00:28:07.608 Initializing NVMe Controllers 00:28:07.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:07.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:07.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:07.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:07.608 Initialization complete. Launching workers. 
00:28:07.608 ======================================================== 00:28:07.608 Latency(us) 00:28:07.608 Device Information : IOPS MiB/s Average min max 00:28:07.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5255.20 20.53 12180.15 1662.34 59733.32 00:28:07.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15349.10 59.96 4169.29 1543.02 45002.35 00:28:07.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4486.80 17.53 14266.52 1903.67 59528.26 00:28:07.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5512.10 21.53 11634.25 1425.52 57692.10 00:28:07.608 ======================================================== 00:28:07.608 Total : 30603.20 119.54 8369.85 1425.52 59733.32 00:28:07.608 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:07.608 rmmod nvme_tcp 00:28:07.608 rmmod nvme_fabrics 00:28:07.608 rmmod nvme_keyring 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:07.608 16:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1087279 ']' 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1087279 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1087279 ']' 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1087279 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1087279 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1087279' 00:28:07.608 killing process with pid 1087279 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1087279 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1087279 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:07.608 
16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.608 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.516 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.516 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:09.516 00:28:09.516 real 0m50.102s 00:28:09.516 user 2m44.049s 00:28:09.516 sys 0m10.109s 00:28:09.516 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.516 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.516 ************************************ 00:28:09.516 END TEST nvmf_perf_adq 00:28:09.516 ************************************ 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:09.776 ************************************ 00:28:09.776 START TEST nvmf_shutdown 00:28:09.776 ************************************ 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:09.776 * Looking for test storage... 00:28:09.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.776 16:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.776 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:09.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.777 --rc genhtml_branch_coverage=1 00:28:09.777 --rc genhtml_function_coverage=1 00:28:09.777 --rc genhtml_legend=1 00:28:09.777 --rc geninfo_all_blocks=1 00:28:09.777 --rc geninfo_unexecuted_blocks=1 00:28:09.777 00:28:09.777 ' 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:09.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.777 --rc genhtml_branch_coverage=1 00:28:09.777 --rc genhtml_function_coverage=1 00:28:09.777 --rc genhtml_legend=1 00:28:09.777 --rc geninfo_all_blocks=1 00:28:09.777 --rc geninfo_unexecuted_blocks=1 00:28:09.777 00:28:09.777 ' 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:09.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.777 --rc genhtml_branch_coverage=1 00:28:09.777 --rc genhtml_function_coverage=1 00:28:09.777 --rc genhtml_legend=1 00:28:09.777 --rc geninfo_all_blocks=1 00:28:09.777 --rc geninfo_unexecuted_blocks=1 00:28:09.777 00:28:09.777 ' 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:09.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.777 --rc genhtml_branch_coverage=1 00:28:09.777 --rc genhtml_function_coverage=1 00:28:09.777 --rc genhtml_legend=1 00:28:09.777 --rc geninfo_all_blocks=1 00:28:09.777 --rc geninfo_unexecuted_blocks=1 00:28:09.777 00:28:09.777 ' 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.777 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:10.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:10.037 ************************************ 00:28:10.037 START TEST nvmf_shutdown_tc1 00:28:10.037 ************************************ 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.037 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:16.610 16:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.610 16:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:16.610 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.610 16:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:16.610 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.610 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:16.611 Found net devices under 0000:af:00.0: cvl_0_0 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:16.611 Found net devices under 0000:af:00.1: cvl_0_1 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:16.611 16:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:16.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:28:16.611 00:28:16.611 --- 10.0.0.2 ping statistics --- 00:28:16.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.611 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:28:16.611 00:28:16.611 --- 10.0.0.1 ping statistics --- 00:28:16.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.611 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1092636 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1092636 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1092636 ']' 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:16.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.611 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.611 [2024-12-14 16:41:46.011987] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:16.611 [2024-12-14 16:41:46.012033] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.611 [2024-12-14 16:41:46.092645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.611 [2024-12-14 16:41:46.114767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.611 [2024-12-14 16:41:46.114807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.611 [2024-12-14 16:41:46.114815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.611 [2024-12-14 16:41:46.114821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.612 [2024-12-14 16:41:46.114826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:16.612 [2024-12-14 16:41:46.116181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.612 [2024-12-14 16:41:46.116292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.612 [2024-12-14 16:41:46.116396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.612 [2024-12-14 16:41:46.116397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.612 [2024-12-14 16:41:46.255443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.612 16:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.612 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.612 Malloc1 00:28:16.612 [2024-12-14 16:41:46.360043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.612 Malloc2 00:28:16.612 Malloc3 00:28:16.612 Malloc4 00:28:16.612 Malloc5 00:28:16.612 Malloc6 00:28:16.612 Malloc7 00:28:16.612 Malloc8 00:28:16.612 Malloc9 
00:28:16.876 Malloc10 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1092753 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1092753 /var/tmp/bdevperf.sock 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1092753 ']' 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:16.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.876 { 00:28:16.876 "params": { 00:28:16.876 "name": "Nvme$subsystem", 00:28:16.876 "trtype": "$TEST_TRANSPORT", 00:28:16.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.876 "adrfam": "ipv4", 00:28:16.876 "trsvcid": "$NVMF_PORT", 00:28:16.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.876 "hdgst": ${hdgst:-false}, 00:28:16.876 "ddgst": ${ddgst:-false} 00:28:16.876 }, 00:28:16.876 "method": "bdev_nvme_attach_controller" 00:28:16.876 } 00:28:16.876 EOF 00:28:16.876 )") 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.876 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.876 { 00:28:16.876 "params": { 00:28:16.876 "name": "Nvme$subsystem", 00:28:16.876 "trtype": "$TEST_TRANSPORT", 00:28:16.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.876 "adrfam": "ipv4", 00:28:16.876 "trsvcid": "$NVMF_PORT", 00:28:16.876 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.876 "hdgst": ${hdgst:-false}, 00:28:16.876 "ddgst": ${ddgst:-false} 00:28:16.876 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.877 { 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme$subsystem", 00:28:16.877 "trtype": "$TEST_TRANSPORT", 00:28:16.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "$NVMF_PORT", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.877 "hdgst": ${hdgst:-false}, 00:28:16.877 "ddgst": ${ddgst:-false} 00:28:16.877 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.877 { 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme$subsystem", 00:28:16.877 "trtype": "$TEST_TRANSPORT", 00:28:16.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "$NVMF_PORT", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.877 "hdgst": 
${hdgst:-false}, 00:28:16.877 "ddgst": ${ddgst:-false} 00:28:16.877 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.877 { 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme$subsystem", 00:28:16.877 "trtype": "$TEST_TRANSPORT", 00:28:16.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "$NVMF_PORT", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.877 "hdgst": ${hdgst:-false}, 00:28:16.877 "ddgst": ${ddgst:-false} 00:28:16.877 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.877 { 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme$subsystem", 00:28:16.877 "trtype": "$TEST_TRANSPORT", 00:28:16.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "$NVMF_PORT", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.877 "hdgst": ${hdgst:-false}, 00:28:16.877 "ddgst": ${ddgst:-false} 00:28:16.877 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 
00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.877 { 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme$subsystem", 00:28:16.877 "trtype": "$TEST_TRANSPORT", 00:28:16.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "$NVMF_PORT", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.877 "hdgst": ${hdgst:-false}, 00:28:16.877 "ddgst": ${ddgst:-false} 00:28:16.877 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 [2024-12-14 16:41:46.829448] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:16.877 [2024-12-14 16:41:46.829498] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.877 { 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme$subsystem", 00:28:16.877 "trtype": "$TEST_TRANSPORT", 00:28:16.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "$NVMF_PORT", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.877 "hdgst": ${hdgst:-false}, 00:28:16.877 "ddgst": ${ddgst:-false} 00:28:16.877 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.877 { 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme$subsystem", 00:28:16.877 "trtype": "$TEST_TRANSPORT", 00:28:16.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "$NVMF_PORT", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.877 "hdgst": ${hdgst:-false}, 
00:28:16.877 "ddgst": ${ddgst:-false} 00:28:16.877 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.877 { 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme$subsystem", 00:28:16.877 "trtype": "$TEST_TRANSPORT", 00:28:16.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "$NVMF_PORT", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.877 "hdgst": ${hdgst:-false}, 00:28:16.877 "ddgst": ${ddgst:-false} 00:28:16.877 }, 00:28:16.877 "method": "bdev_nvme_attach_controller" 00:28:16.877 } 00:28:16.877 EOF 00:28:16.877 )") 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:16.877 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:16.877 "params": { 00:28:16.877 "name": "Nvme1", 00:28:16.877 "trtype": "tcp", 00:28:16.877 "traddr": "10.0.0.2", 00:28:16.877 "adrfam": "ipv4", 00:28:16.877 "trsvcid": "4420", 00:28:16.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:16.877 "hdgst": false, 00:28:16.877 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 00:28:16.878 "name": "Nvme2", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 00:28:16.878 "name": "Nvme3", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 00:28:16.878 "name": "Nvme4", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 
00:28:16.878 "name": "Nvme5", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 00:28:16.878 "name": "Nvme6", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 00:28:16.878 "name": "Nvme7", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 00:28:16.878 "name": "Nvme8", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 00:28:16.878 "name": "Nvme9", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 },{ 00:28:16.878 "params": { 00:28:16.878 "name": "Nvme10", 00:28:16.878 "trtype": "tcp", 00:28:16.878 "traddr": "10.0.0.2", 00:28:16.878 "adrfam": "ipv4", 00:28:16.878 "trsvcid": "4420", 00:28:16.878 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:16.878 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:16.878 "hdgst": false, 00:28:16.878 "ddgst": false 00:28:16.878 }, 00:28:16.878 "method": "bdev_nvme_attach_controller" 00:28:16.878 }' 00:28:16.878 [2024-12-14 16:41:46.905837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.878 [2024-12-14 16:41:46.928193] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1092753 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:18.976 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:19.913 
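The xtrace output above repeats one shell pattern many times (nvmf/common.sh@562 and @582): a JSON fragment is generated per subsystem with a here-doc, appended to a bash array, and the fragments are finally joined with `IFS=,` and validated with `jq .`. A minimal sketch of that technique, using a hypothetical simplified helper `gen_target_json` (not SPDK's actual `gen_nvmf_target_json`, which fills in transport, address, and digest parameters from the environment):

```shell
#!/usr/bin/env bash
# Hedged sketch of the config-array pattern visible in the log.
# gen_target_json is a hypothetical stand-in for SPDK's gen_nvmf_target_json.
gen_target_json() {
    local subsystem
    local config=()
    # Default to a single subsystem "1" when called with no arguments,
    # mirroring the for subsystem in "${@:-1}" loop in the log.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Joining the array with IFS=, inserts a comma between fragments,
    # which produces the "},{" separators seen in the printf output above.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 1 2
```

In the log this joined output is then fed to the bdevperf/bdev_svc process via process substitution (`--json <(gen_nvmf_target_json …)`), so the config never touches the filesystem.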
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1092753 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1092636 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.913 { 00:28:19.913 "params": { 00:28:19.913 "name": "Nvme$subsystem", 00:28:19.913 "trtype": "$TEST_TRANSPORT", 00:28:19.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.913 "adrfam": "ipv4", 00:28:19.913 "trsvcid": "$NVMF_PORT", 00:28:19.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.913 "hdgst": ${hdgst:-false}, 00:28:19.913 "ddgst": ${ddgst:-false} 00:28:19.913 }, 00:28:19.913 "method": "bdev_nvme_attach_controller" 00:28:19.913 } 00:28:19.913 EOF 00:28:19.913 )") 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.913 16:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.913 { 00:28:19.913 "params": { 00:28:19.913 "name": "Nvme$subsystem", 00:28:19.913 "trtype": "$TEST_TRANSPORT", 00:28:19.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.913 "adrfam": "ipv4", 00:28:19.913 "trsvcid": "$NVMF_PORT", 00:28:19.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.913 "hdgst": ${hdgst:-false}, 00:28:19.913 "ddgst": ${ddgst:-false} 00:28:19.913 }, 00:28:19.913 "method": "bdev_nvme_attach_controller" 00:28:19.913 } 00:28:19.913 EOF 00:28:19.913 )") 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.913 { 00:28:19.913 "params": { 00:28:19.913 "name": "Nvme$subsystem", 00:28:19.913 "trtype": "$TEST_TRANSPORT", 00:28:19.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.913 "adrfam": "ipv4", 00:28:19.913 "trsvcid": "$NVMF_PORT", 00:28:19.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.913 "hdgst": ${hdgst:-false}, 00:28:19.913 "ddgst": ${ddgst:-false} 00:28:19.913 }, 00:28:19.913 "method": "bdev_nvme_attach_controller" 00:28:19.913 } 00:28:19.913 EOF 00:28:19.913 )") 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.913 
16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.913 { 00:28:19.913 "params": { 00:28:19.913 "name": "Nvme$subsystem", 00:28:19.913 "trtype": "$TEST_TRANSPORT", 00:28:19.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.913 "adrfam": "ipv4", 00:28:19.913 "trsvcid": "$NVMF_PORT", 00:28:19.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.913 "hdgst": ${hdgst:-false}, 00:28:19.913 "ddgst": ${ddgst:-false} 00:28:19.913 }, 00:28:19.913 "method": "bdev_nvme_attach_controller" 00:28:19.913 } 00:28:19.913 EOF 00:28:19.913 )") 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.913 { 00:28:19.913 "params": { 00:28:19.913 "name": "Nvme$subsystem", 00:28:19.913 "trtype": "$TEST_TRANSPORT", 00:28:19.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.913 "adrfam": "ipv4", 00:28:19.913 "trsvcid": "$NVMF_PORT", 00:28:19.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.913 "hdgst": ${hdgst:-false}, 00:28:19.913 "ddgst": ${ddgst:-false} 00:28:19.913 }, 00:28:19.913 "method": "bdev_nvme_attach_controller" 00:28:19.913 } 00:28:19.913 EOF 00:28:19.913 )") 00:28:19.913 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:28:19.914 { 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme$subsystem", 00:28:19.914 "trtype": "$TEST_TRANSPORT", 00:28:19.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "$NVMF_PORT", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.914 "hdgst": ${hdgst:-false}, 00:28:19.914 "ddgst": ${ddgst:-false} 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 } 00:28:19.914 EOF 00:28:19.914 )") 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.914 { 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme$subsystem", 00:28:19.914 "trtype": "$TEST_TRANSPORT", 00:28:19.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "$NVMF_PORT", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.914 "hdgst": ${hdgst:-false}, 00:28:19.914 "ddgst": ${ddgst:-false} 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 } 00:28:19.914 EOF 00:28:19.914 )") 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.914 [2024-12-14 16:41:49.766842] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:19.914 [2024-12-14 16:41:49.766894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093309 ] 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.914 { 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme$subsystem", 00:28:19.914 "trtype": "$TEST_TRANSPORT", 00:28:19.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "$NVMF_PORT", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.914 "hdgst": ${hdgst:-false}, 00:28:19.914 "ddgst": ${ddgst:-false} 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 } 00:28:19.914 EOF 00:28:19.914 )") 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.914 { 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme$subsystem", 00:28:19.914 "trtype": "$TEST_TRANSPORT", 00:28:19.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "$NVMF_PORT", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.914 "hdgst": ${hdgst:-false}, 00:28:19.914 "ddgst": ${ddgst:-false} 00:28:19.914 }, 00:28:19.914 "method": 
"bdev_nvme_attach_controller" 00:28:19.914 } 00:28:19.914 EOF 00:28:19.914 )") 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.914 { 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme$subsystem", 00:28:19.914 "trtype": "$TEST_TRANSPORT", 00:28:19.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "$NVMF_PORT", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.914 "hdgst": ${hdgst:-false}, 00:28:19.914 "ddgst": ${ddgst:-false} 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 } 00:28:19.914 EOF 00:28:19.914 )") 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:19.914 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme1", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme2", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme3", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme4", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 
00:28:19.914 "name": "Nvme5", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme6", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme7", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme8", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme9", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.914 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:19.914 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:19.914 "hdgst": false, 00:28:19.914 "ddgst": false 00:28:19.914 }, 00:28:19.914 "method": "bdev_nvme_attach_controller" 00:28:19.914 },{ 00:28:19.914 "params": { 00:28:19.914 "name": "Nvme10", 00:28:19.914 "trtype": "tcp", 00:28:19.914 "traddr": "10.0.0.2", 00:28:19.914 "adrfam": "ipv4", 00:28:19.914 "trsvcid": "4420", 00:28:19.915 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:19.915 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:19.915 "hdgst": false, 00:28:19.915 "ddgst": false 00:28:19.915 }, 00:28:19.915 "method": "bdev_nvme_attach_controller" 00:28:19.915 }' 00:28:19.915 [2024-12-14 16:41:49.844830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.915 [2024-12-14 16:41:49.867221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.291 Running I/O for 1 seconds... 00:28:22.485 2254.00 IOPS, 140.88 MiB/s 00:28:22.485 Latency(us) 00:28:22.485 [2024-12-14T15:41:52.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.485 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme1n1 : 1.12 284.50 17.78 0.00 0.00 222956.01 16352.79 221698.93 00:28:22.485 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme2n1 : 1.12 290.39 18.15 0.00 0.00 214951.98 5180.46 196732.83 00:28:22.485 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme3n1 : 1.11 292.32 18.27 0.00 0.00 206637.17 12046.14 217704.35 00:28:22.485 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme4n1 : 1.14 280.81 17.55 0.00 0.00 216695.27 14979.66 225693.50 00:28:22.485 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme5n1 : 1.09 234.14 14.63 0.00 0.00 255459.47 25215.76 228689.43 00:28:22.485 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme6n1 : 1.15 279.34 17.46 0.00 0.00 211706.44 28461.35 209715.20 00:28:22.485 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme7n1 : 1.13 287.00 17.94 0.00 0.00 202692.43 11796.48 215707.06 00:28:22.485 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme8n1 : 1.13 282.16 17.64 0.00 0.00 203208.22 11858.90 215707.06 00:28:22.485 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme9n1 : 1.14 279.94 17.50 0.00 0.00 201935.73 16976.94 224694.86 00:28:22.485 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.485 Verification LBA range: start 0x0 length 0x400 00:28:22.485 Nvme10n1 : 1.15 278.59 17.41 0.00 0.00 199995.20 16352.79 239674.51 00:28:22.485 [2024-12-14T15:41:52.571Z] =================================================================================================================== 00:28:22.485 [2024-12-14T15:41:52.571Z] Total : 2789.20 174.32 0.00 0.00 212750.93 5180.46 239674.51 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.485 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.485 rmmod nvme_tcp 00:28:22.485 rmmod nvme_fabrics 00:28:22.485 rmmod nvme_keyring 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1092636 ']' 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1092636 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1092636 ']' 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1092636 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092636 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:22.744 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092636' 00:28:22.744 killing process with pid 1092636 00:28:22.745 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1092636 00:28:22.745 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1092636 00:28:23.004 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.004 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.004 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.004 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:23.004 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:23.004 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.004 16:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.004 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.004 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.004 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.004 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.004 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.540 00:28:25.540 real 0m15.175s 00:28:25.540 user 0m33.274s 00:28:25.540 sys 0m5.830s 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.540 ************************************ 00:28:25.540 END TEST nvmf_shutdown_tc1 00:28:25.540 ************************************ 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.540 ************************************ 00:28:25.540 
START TEST nvmf_shutdown_tc2 00:28:25.540 ************************************ 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.540 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:25.540 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.540 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:25.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.540 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:25.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:25.540 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.541 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:25.541 Found net devices under 0000:af:00.0: cvl_0_0 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:25.541 Found net devices under 0000:af:00.1: cvl_0_1 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:25.541 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:25.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:25.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:28:25.541 00:28:25.541 --- 10.0.0.2 ping statistics --- 00:28:25.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.541 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:25.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:28:25.541 00:28:25.541 --- 10.0.0.1 ping statistics --- 00:28:25.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.541 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:25.541 16:41:55 
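The two ping probes above verify bidirectional reachability between the host-side interface and the one moved into the cvl_0_0_ns_spdk namespace. If the average RTT from such a summary is ever needed programmatically, the rtt line splits cleanly on "/" — a sketch against the first probe's output:

```shell
# The ping(8) summary "rtt min/avg/max/mdev = A/B/C/D ms" splits on "/":
# field 4 ends with the min value, field 5 is the avg, field 6 the max.
line='rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms'
avg=$(printf '%s\n' "$line" | awk -F/ '{print $5}')
echo "$avg"   # 0.344
```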
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1094340 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1094340 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1094340 ']' 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.541 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.541 [2024-12-14 16:41:55.511202] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:25.541 [2024-12-14 16:41:55.511247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.541 [2024-12-14 16:41:55.589424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.541 [2024-12-14 16:41:55.612183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.541 [2024-12-14 16:41:55.612223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.541 [2024-12-14 16:41:55.612230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.541 [2024-12-14 16:41:55.612236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.541 [2024-12-14 16:41:55.612242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:25.541 [2024-12-14 16:41:55.613551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.541 [2024-12-14 16:41:55.613585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.541 [2024-12-14 16:41:55.613707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.541 [2024-12-14 16:41:55.613708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.801 [2024-12-14 16:41:55.741944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.801 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.801 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.801 Malloc1 00:28:25.801 [2024-12-14 16:41:55.864445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.801 Malloc2 00:28:26.060 Malloc3 00:28:26.060 Malloc4 00:28:26.060 Malloc5 00:28:26.060 Malloc6 00:28:26.060 Malloc7 00:28:26.319 Malloc8 00:28:26.319 Malloc9 
00:28:26.319 Malloc10 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1094446 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1094446 /var/tmp/bdevperf.sock 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1094446 ']' 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:26.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 "adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": ${ddgst:-false} 00:28:26.320 }, 00:28:26.320 "method": "bdev_nvme_attach_controller" 00:28:26.320 } 00:28:26.320 EOF 00:28:26.320 )") 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 
"adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": ${ddgst:-false} 00:28:26.320 }, 00:28:26.320 "method": "bdev_nvme_attach_controller" 00:28:26.320 } 00:28:26.320 EOF 00:28:26.320 )") 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 "adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": ${ddgst:-false} 00:28:26.320 }, 00:28:26.320 "method": "bdev_nvme_attach_controller" 00:28:26.320 } 00:28:26.320 EOF 00:28:26.320 )") 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 "adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": ${ddgst:-false} 00:28:26.320 }, 00:28:26.320 "method": "bdev_nvme_attach_controller" 00:28:26.320 } 00:28:26.320 EOF 00:28:26.320 )") 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 "adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": ${ddgst:-false} 00:28:26.320 }, 00:28:26.320 "method": "bdev_nvme_attach_controller" 00:28:26.320 } 00:28:26.320 EOF 00:28:26.320 )") 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 "adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": 
${ddgst:-false} 00:28:26.320 }, 00:28:26.320 "method": "bdev_nvme_attach_controller" 00:28:26.320 } 00:28:26.320 EOF 00:28:26.320 )") 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 "adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": ${ddgst:-false} 00:28:26.320 }, 00:28:26.320 "method": "bdev_nvme_attach_controller" 00:28:26.320 } 00:28:26.320 EOF 00:28:26.320 )") 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.320 [2024-12-14 16:41:56.337927] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:26.320 [2024-12-14 16:41:56.337972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094446 ] 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 "adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": ${ddgst:-false} 00:28:26.320 }, 00:28:26.320 "method": "bdev_nvme_attach_controller" 00:28:26.320 } 00:28:26.320 EOF 00:28:26.320 )") 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.320 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.320 { 00:28:26.320 "params": { 00:28:26.320 "name": "Nvme$subsystem", 00:28:26.320 "trtype": "$TEST_TRANSPORT", 00:28:26.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.320 "adrfam": "ipv4", 00:28:26.320 "trsvcid": "$NVMF_PORT", 00:28:26.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.320 "hdgst": ${hdgst:-false}, 00:28:26.320 "ddgst": ${ddgst:-false} 00:28:26.320 }, 00:28:26.321 "method": 
"bdev_nvme_attach_controller" 00:28:26.321 } 00:28:26.321 EOF 00:28:26.321 )") 00:28:26.321 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.321 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.321 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.321 { 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme$subsystem", 00:28:26.321 "trtype": "$TEST_TRANSPORT", 00:28:26.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "$NVMF_PORT", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.321 "hdgst": ${hdgst:-false}, 00:28:26.321 "ddgst": ${ddgst:-false} 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 } 00:28:26.321 EOF 00:28:26.321 )") 00:28:26.321 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.321 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:28:26.321 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:26.321 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme1", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme2", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme3", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme4", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 
00:28:26.321 "name": "Nvme5", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme6", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme7", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme8", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme9", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 },{ 00:28:26.321 "params": { 00:28:26.321 "name": "Nvme10", 00:28:26.321 "trtype": "tcp", 00:28:26.321 "traddr": "10.0.0.2", 00:28:26.321 "adrfam": "ipv4", 00:28:26.321 "trsvcid": "4420", 00:28:26.321 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:26.321 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:26.321 "hdgst": false, 00:28:26.321 "ddgst": false 00:28:26.321 }, 00:28:26.321 "method": "bdev_nvme_attach_controller" 00:28:26.321 }' 00:28:26.580 [2024-12-14 16:41:56.414832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.580 [2024-12-14 16:41:56.437105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.956 Running I/O for 10 seconds... 00:28:28.215 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.215 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:28.215 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:28.215 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.215 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.215 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.215 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:28.216 16:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 1094446 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1094446 ']' 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1094446 00:28:28.216 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:28.475 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.475 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094446 00:28:28.475 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:28.475 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:28.475 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094446' 00:28:28.475 killing process with pid 1094446 00:28:28.475 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1094446 00:28:28.475 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1094446 00:28:28.475 Received shutdown signal, test time was about 0.639660 seconds 00:28:28.475 00:28:28.475 Latency(us) 00:28:28.475 [2024-12-14T15:41:58.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.475 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme1n1 : 0.62 313.82 19.61 0.00 0.00 199360.96 4431.48 202724.69 00:28:28.475 Job: Nvme2n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme2n1 : 0.63 304.68 19.04 0.00 0.00 201471.19 16227.96 199728.76 00:28:28.475 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme3n1 : 0.64 302.20 18.89 0.00 0.00 196581.02 15042.07 197731.47 00:28:28.475 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme4n1 : 0.62 308.74 19.30 0.00 0.00 188424.78 26089.57 198730.12 00:28:28.475 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme5n1 : 0.63 302.55 18.91 0.00 0.00 187496.35 18474.91 217704.35 00:28:28.475 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme6n1 : 0.64 300.47 18.78 0.00 0.00 183733.88 16852.11 217704.35 00:28:28.475 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme7n1 : 0.63 306.29 19.14 0.00 0.00 174299.10 14168.26 187745.04 00:28:28.475 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme8n1 : 0.60 213.52 13.34 0.00 0.00 240526.14 14854.83 213709.78 00:28:28.475 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme9n1 : 0.61 209.83 13.11 0.00 0.00 238285.78 17101.78 218702.99 00:28:28.475 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:28.475 Verification LBA range: start 0x0 length 0x400 00:28:28.475 Nvme10n1 : 0.62 207.41 12.96 0.00 0.00 232653.78 17476.27 238675.87 
00:28:28.475 [2024-12-14T15:41:58.561Z] =================================================================================================================== 00:28:28.475 [2024-12-14T15:41:58.561Z] Total : 2769.50 173.09 0.00 0.00 200627.93 4431.48 238675.87 00:28:28.734 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1094340 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:28:29.670 rmmod nvme_tcp 00:28:29.670 rmmod nvme_fabrics 00:28:29.670 rmmod nvme_keyring 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1094340 ']' 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1094340 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1094340 ']' 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1094340 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094340 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094340' 00:28:29.670 killing process with pid 1094340 00:28:29.670 16:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1094340 00:28:29.670 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1094340 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.238 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.143 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.143 00:28:32.143 real 
0m7.003s 00:28:32.143 user 0m20.016s 00:28:32.143 sys 0m1.223s 00:28:32.143 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.143 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.143 ************************************ 00:28:32.143 END TEST nvmf_shutdown_tc2 00:28:32.143 ************************************ 00:28:32.143 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:32.143 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:32.143 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.143 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.403 ************************************ 00:28:32.403 START TEST nvmf_shutdown_tc3 00:28:32.403 ************************************ 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.403 
16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:32.403 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.403 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:32.404 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:32.404 16:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:32.404 Found net devices under 0000:af:00.0: cvl_0_0 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.404 
16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:32.404 Found net devices under 0000:af:00.1: cvl_0_1 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.404 16:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.404 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:28:32.663 00:28:32.663 --- 10.0.0.2 ping statistics --- 00:28:32.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.663 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:28:32.663 00:28:32.663 --- 10.0.0.1 ping statistics --- 00:28:32.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.663 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.663 
16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1095747 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1095747 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1095747 ']' 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.663 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.663 [2024-12-14 16:42:02.599296] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:32.663 [2024-12-14 16:42:02.599340] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.663 [2024-12-14 16:42:02.677646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.663 [2024-12-14 16:42:02.699410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.663 [2024-12-14 16:42:02.699450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.663 [2024-12-14 16:42:02.699456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.663 [2024-12-14 16:42:02.699462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.663 [2024-12-14 16:42:02.699467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:32.663 [2024-12-14 16:42:02.700877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.663 [2024-12-14 16:42:02.700985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.663 [2024-12-14 16:42:02.701093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.663 [2024-12-14 16:42:02.701094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.922 [2024-12-14 16:42:02.836820] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.922 16:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.922 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.923 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.923 Malloc1 00:28:32.923 [2024-12-14 16:42:02.949016] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.923 Malloc2 00:28:33.181 Malloc3 00:28:33.181 Malloc4 00:28:33.181 Malloc5 00:28:33.181 Malloc6 00:28:33.181 Malloc7 00:28:33.181 Malloc8 00:28:33.441 Malloc9 
00:28:33.441 Malloc10 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1095863 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1095863 /var/tmp/bdevperf.sock 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1095863 ']' 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:33.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.441 { 00:28:33.441 "params": { 00:28:33.441 "name": "Nvme$subsystem", 00:28:33.441 "trtype": "$TEST_TRANSPORT", 00:28:33.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.441 "adrfam": "ipv4", 00:28:33.441 "trsvcid": "$NVMF_PORT", 00:28:33.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.441 "hdgst": ${hdgst:-false}, 00:28:33.441 "ddgst": ${ddgst:-false} 00:28:33.441 }, 00:28:33.441 "method": "bdev_nvme_attach_controller" 00:28:33.441 } 00:28:33.441 EOF 00:28:33.441 )") 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.441 { 00:28:33.441 "params": { 00:28:33.441 "name": "Nvme$subsystem", 00:28:33.441 "trtype": "$TEST_TRANSPORT", 00:28:33.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.441 
"adrfam": "ipv4", 00:28:33.441 "trsvcid": "$NVMF_PORT", 00:28:33.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.441 "hdgst": ${hdgst:-false}, 00:28:33.441 "ddgst": ${ddgst:-false} 00:28:33.441 }, 00:28:33.441 "method": "bdev_nvme_attach_controller" 00:28:33.441 } 00:28:33.441 EOF 00:28:33.441 )") 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.441 { 00:28:33.441 "params": { 00:28:33.441 "name": "Nvme$subsystem", 00:28:33.441 "trtype": "$TEST_TRANSPORT", 00:28:33.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.441 "adrfam": "ipv4", 00:28:33.441 "trsvcid": "$NVMF_PORT", 00:28:33.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.441 "hdgst": ${hdgst:-false}, 00:28:33.441 "ddgst": ${ddgst:-false} 00:28:33.441 }, 00:28:33.441 "method": "bdev_nvme_attach_controller" 00:28:33.441 } 00:28:33.441 EOF 00:28:33.441 )") 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.441 { 00:28:33.441 "params": { 00:28:33.441 "name": "Nvme$subsystem", 00:28:33.441 "trtype": "$TEST_TRANSPORT", 00:28:33.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.441 "adrfam": "ipv4", 00:28:33.441 "trsvcid": "$NVMF_PORT", 00:28:33.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:33.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.441 "hdgst": ${hdgst:-false}, 00:28:33.441 "ddgst": ${ddgst:-false} 00:28:33.441 }, 00:28:33.441 "method": "bdev_nvme_attach_controller" 00:28:33.441 } 00:28:33.441 EOF 00:28:33.441 )") 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.441 { 00:28:33.441 "params": { 00:28:33.441 "name": "Nvme$subsystem", 00:28:33.441 "trtype": "$TEST_TRANSPORT", 00:28:33.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.441 "adrfam": "ipv4", 00:28:33.441 "trsvcid": "$NVMF_PORT", 00:28:33.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.441 "hdgst": ${hdgst:-false}, 00:28:33.441 "ddgst": ${ddgst:-false} 00:28:33.441 }, 00:28:33.441 "method": "bdev_nvme_attach_controller" 00:28:33.441 } 00:28:33.441 EOF 00:28:33.441 )") 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.441 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.441 { 00:28:33.441 "params": { 00:28:33.441 "name": "Nvme$subsystem", 00:28:33.441 "trtype": "$TEST_TRANSPORT", 00:28:33.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.441 "adrfam": "ipv4", 00:28:33.441 "trsvcid": "$NVMF_PORT", 00:28:33.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.441 "hdgst": ${hdgst:-false}, 00:28:33.442 "ddgst": 
${ddgst:-false} 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 } 00:28:33.442 EOF 00:28:33.442 )") 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.442 [2024-12-14 16:42:03.415638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:33.442 [2024-12-14 16:42:03.415685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095863 ] 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.442 { 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme$subsystem", 00:28:33.442 "trtype": "$TEST_TRANSPORT", 00:28:33.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "$NVMF_PORT", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.442 "hdgst": ${hdgst:-false}, 00:28:33.442 "ddgst": ${ddgst:-false} 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 } 00:28:33.442 EOF 00:28:33.442 )") 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.442 { 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme$subsystem", 00:28:33.442 "trtype": "$TEST_TRANSPORT", 00:28:33.442 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "$NVMF_PORT", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.442 "hdgst": ${hdgst:-false}, 00:28:33.442 "ddgst": ${ddgst:-false} 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 } 00:28:33.442 EOF 00:28:33.442 )") 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.442 { 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme$subsystem", 00:28:33.442 "trtype": "$TEST_TRANSPORT", 00:28:33.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "$NVMF_PORT", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.442 "hdgst": ${hdgst:-false}, 00:28:33.442 "ddgst": ${ddgst:-false} 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 } 00:28:33.442 EOF 00:28:33.442 )") 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.442 { 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme$subsystem", 00:28:33.442 "trtype": "$TEST_TRANSPORT", 00:28:33.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "$NVMF_PORT", 00:28:33.442 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.442 "hdgst": ${hdgst:-false}, 00:28:33.442 "ddgst": ${ddgst:-false} 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 } 00:28:33.442 EOF 00:28:33.442 )") 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:33.442 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme1", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme2", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme3", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 
"method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme4", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme5", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme6", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme7", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme8", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme9", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 },{ 00:28:33.442 "params": { 00:28:33.442 "name": "Nvme10", 00:28:33.442 "trtype": "tcp", 00:28:33.442 "traddr": "10.0.0.2", 00:28:33.442 "adrfam": "ipv4", 00:28:33.442 "trsvcid": "4420", 00:28:33.442 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:33.442 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:33.442 "hdgst": false, 00:28:33.442 "ddgst": false 00:28:33.442 }, 00:28:33.442 "method": "bdev_nvme_attach_controller" 00:28:33.442 }' 00:28:33.442 [2024-12-14 16:42:03.489860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.442 [2024-12-14 16:42:03.512304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.346 Running I/O for 10 seconds... 
00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:35.346 16:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=22 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 22 -ge 100 ']' 00:28:35.346 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1095747 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1095747 ']' 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1095747 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:35.604 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.870 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1095747 00:28:35.870 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.870 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:35.870 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1095747' 00:28:35.870 killing process with pid 1095747 00:28:35.870 16:42:05 
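The waitforio helper traced above (target/shutdown.sh@58-@70) polls bdev_get_iostat over the bdevperf RPC socket up to ten times, extracting num_read_ops with jq and succeeding once at least 100 reads have completed; here it reads 22 ops, sleeps 0.25 s, then sees 131 and breaks. A self-contained sketch of that loop, with a stubbed rpc_cmd standing in for the real RPC client (the stub and its growing counts are assumptions for illustration, and the sed extraction is a jq-free stand-in for the script's `jq -r '.bdevs[0].num_read_ops'` filter):

```shell
#!/usr/bin/env bash
attempt=0

# Stub: the real rpc_cmd talks to bdevperf over /var/tmp/bdevperf.sock;
# this fake returns a read count that grows with each poll attempt.
rpc_cmd() {
    echo "{\"bdevs\":[{\"name\":\"Nvme1n1\",\"num_read_ops\":$(($1 * 70))}]}"
}

waitforio() {
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        attempt=$((attempt + 1))
        # Extract the op count from the iostat JSON (jq-free stand-in).
        count=$(rpc_cmd "$attempt" | sed 's/.*"num_read_ops"://; s/[^0-9].*//')
        if [ "$count" -ge 100 ]; then
            ret=0   # enough I/O has completed; same threshold as the trace
            break
        fi
        sleep 0.25  # same back-off as shutdown.sh@68
    done
    return $ret
}

waitforio && echo "I/O threshold reached after $attempt polls"
```

With the stub's counts (70, then 140), the first poll falls short of 100 and the second succeeds, mirroring the 22-then-131 progression in the log.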
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1095747 00:28:35.870 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1095747
00:28:35.870 [2024-12-14 16:42:05.745364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ef00 is same with the state(6) to be set
[identical message for tqpair=0x135ef00 repeated through 16:42:05.745838; duplicate lines elided]
00:28:35.870 [2024-12-14 16:42:05.746734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361980 is same with the state(6) to be set
[identical message for tqpair=0x1361980 repeated at 16:42:05.746766]
00:28:35.871 [2024-12-14 16:42:05.747552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f3f0 is same with the state(6) to be set
[identical message for tqpair=0x135f3f0 repeated through 16:42:05.747958; duplicate lines elided]
00:28:35.871 [2024-12-14 16:42:05.749273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set
[identical message for tqpair=0x135f8c0 repeated through 16:42:05.749382; log truncated at 16:42:05.749388]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749466] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749542] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.871 [2024-12-14 16:42:05.749571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749621] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.749689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f8c0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750746] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750843] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750932] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.750994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751006] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751082] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751157] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.751170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135fdb0 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752337] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752413] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.872 [2024-12-14 16:42:05.752480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752487] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [2024-12-14 16:42:05.752577] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360770 is same with the state(6) to be set 00:28:35.873 [... same *ERROR* line from tcp.c:1790:nvmf_tcp_qpair_set_recv_state repeated many times: for tqpair=0x1360770 (through 2024-12-14 16:42:05.752684), for tqpair=0x1360af0 (16:42:05.753696-754088) and for tqpair=0x1360fc0 (16:42:05.754884-755283) ...] 00:28:35.874 [2024-12-14 16:42:05.760470] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.874 [2024-12-14 16:42:05.760501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pair repeated for cid:1-3, each group closed by an nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state *ERROR* "recv state ... is same with the state(6) to be set" line, cycling over tqpair=0x1fdf610, 0x20d3cd0, 0x24f7de0, 0x20c9630, 0x20d0ca0, 0x20d4140, 0x2533e90, 0x25340b0, 0x24ff370 and 0x251a530 (16:42:05.760501-761270) ...] 00:28:35.875 [2024-12-14 16:42:05.761983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762279] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762359] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.875 [2024-12-14 16:42:05.762458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.875 [2024-12-14 16:42:05.762466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 
[2024-12-14 16:42:05.762526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 
16:42:05.762863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.762944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.762951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.876 [2024-12-14 16:42:05.763462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.876 [2024-12-14 16:42:05.763470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.877 [2024-12-14 16:42:05.763476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.877 [2024-12-14 16:42:05.763484] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.877 [2024-12-14 16:42:05.763491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:13 through cid:59 (lba:18048 through lba:23936, len:128), timestamps 16:42:05.763501 through 16:42:05.770201 ...]
00:28:35.878 [2024-12-14 16:42:05.771400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdf610 (9): Bad file descriptor
[... the same "Failed to flush tqpair=... (9): Bad file descriptor" error repeats for tqpair 0x20d3cd0, 0x24f7de0, 0x20c9630, 0x20d0ca0, 0x20d4140, 0x2533e90, 0x25340b0, 0x24ff370 and 0x251a530, timestamps 16:42:05.771432 through 16:42:05.771543 ...]
00:28:35.878 [2024-12-14 16:42:05.772878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:35.878 [2024-12-14 16:42:05.774604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:35.878 [2024-12-14 16:42:05.774832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.878 [2024-12-14 16:42:05.774851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f7de0 with addr=10.0.0.2, port=4420
00:28:35.878 [2024-12-14 16:42:05.774860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7de0 is same with the state(6) to be set
[... nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 repeats 7 times, 16:42:05.774911 through 16:42:05.775397; the same connect() failed, errno = 111 / sock connection error / recv state triple repeats for tqpair=0x25340b0 (16:42:05.775600) and tqpair=0x2533e90 (16:42:05.776138), both with addr=10.0.0.2, port=4420, each followed by a Failed to flush tqpair=... (9): Bad file descriptor for that tqpair ...]
00:28:35.878 [2024-12-14 16:42:05.775953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
[... for each of cnode5 (16:42:05.775989), cnode10 (16:42:05.776165) and cnode9 (16:42:05.776254) the same failure sequence follows: nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: Ctrlr is in error state; nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: controller reinitialization failed; nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: in failed state.; bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. ...]
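Not part of the log itself, but useful for reading the `connect() failed, errno = 111` lines above: on Linux, errno 111 is `ECONNREFUSED`, i.e. the NVMe/TCP target at 10.0.0.2:4420 actively rejected the reconnect attempt (typically because nothing is listening there during the reset). A minimal sketch reproducing the same errno; the helper name `connect_refused` is ours, not SPDK's:

```python
import errno
import socket

# errno 111 on Linux is ECONNREFUSED: the peer sent RST because no
# listener was bound to the port, which is what SPDK's posix_sock_create()
# reports while the target's 4420 listener is down mid-reset.
assert errno.ECONNREFUSED == 111  # holds on Linux

def connect_refused(host: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success or the errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Connecting to a loopback port with no listener reproduces the log's errno:
#   connect_refused("127.0.0.1", 1)  ->  111 on Linux when port 1 is unused
```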
00:28:35.878 [2024-12-14 16:42:05.781520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.878 [2024-12-14 16:42:05.781537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:44 (lba:16512 through lba:22016, len:128), timestamps 16:42:05.781551 through 16:42:05.782236; the log chunk is truncated mid-entry at 16:42:05.782245 ...]
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 
[2024-12-14 16:42:05.782422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.782530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.782538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d7eb0 is same with the state(6) to be set 00:28:35.879 [2024-12-14 16:42:05.783587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.879 [2024-12-14 16:42:05.783738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.879 [2024-12-14 16:42:05.783753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.879 [2024-12-14 16:42:05.783760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783825] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.783989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.783996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 
16:42:05.784087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784175] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 
[2024-12-14 16:42:05.784356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.880 [2024-12-14 16:42:05.784427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.880 [2024-12-14 16:42:05.784434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.881 [2024-12-14 16:42:05.784442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.881 [2024-12-14 16:42:05.784450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.881 [2024-12-14 16:42:05.784458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.881 [2024-12-14 16:42:05.784465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.881 [2024-12-14 16:42:05.784474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.881 [2024-12-14 16:42:05.784481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.881 [2024-12-14 16:42:05.784489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.881 [2024-12-14 16:42:05.784496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.881 [2024-12-14 16:42:05.784505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.881 [2024-12-14 16:42:05.784512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.881 [2024-12-14 16:42:05.784520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.881 [2024-12-14 16:42:05.784527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:35.881 [2024-12-14 16:42:05.784536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.881 [2024-12-14 16:42:05.784543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ / ABORTED - SQ DELETION pairs repeat for cid:61-63 (lba:24192-24448, step 128), timestamps 16:42:05.784552-16:42:05.784600]
00:28:35.881 [2024-12-14 16:42:05.784608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8e80 is same with the state(6) to be set
00:28:35.881 [2024-12-14 16:42:05.785640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.881 [2024-12-14 16:42:05.785654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ / ABORTED - SQ DELETION pairs repeat for cid:1-63 (lba:16512-24448, step 128), timestamps 16:42:05.785666-16:42:05.786640]
00:28:35.882 [2024-12-14 16:42:05.786648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d7e90 is same with the state(6) to be set
00:28:35.882 [2024-12-14 16:42:05.787686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.882 [2024-12-14 16:42:05.787701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ / ABORTED - SQ DELETION pairs repeat for cid:5-52 (lba:17024-23040, step 128), timestamps 16:42:05.787712-16:42:05.788413]
00:28:35.883 [2024-12-14 16:42:05.788422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.883
[2024-12-14 16:42:05.788428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.883 [2024-12-14 16:42:05.788436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.883 [2024-12-14 16:42:05.788443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.883 [2024-12-14 16:42:05.788451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.883 [2024-12-14 16:42:05.788457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.883 [2024-12-14 16:42:05.788465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.883 [2024-12-14 16:42:05.788471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.883 [2024-12-14 16:42:05.788479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.883 [2024-12-14 16:42:05.788486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.883 [2024-12-14 16:42:05.788494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.883 [2024-12-14 16:42:05.788500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.883 [2024-12-14 16:42:05.788508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.883 [2024-12-14 16:42:05.788515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.883 [2024-12-14 16:42:05.788523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.883 [2024-12-14 16:42:05.788530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.883 [2024-12-14 16:42:05.788538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.788545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.788553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.788567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.788577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.788583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.788591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.788598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.788606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.788612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.788621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.788627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.788635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.788642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.788649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d90e0 is same with the state(6) to be set 00:28:35.884 [2024-12-14 16:42:05.789670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.884 [2024-12-14 16:42:05.789712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.789970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:35.884 [2024-12-14 16:42:05.789985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.789994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.884 [2024-12-14 16:42:05.790170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.884 [2024-12-14 16:42:05.790177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 
16:42:05.790340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790421] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 
[2024-12-14 16:42:05.790595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.885 [2024-12-14 16:42:05.790668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.885 [2024-12-14 16:42:05.790675] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24db680 is same with the state(6) to be set
00:28:35.885 [2024-12-14 16:42:05.791690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.885 [2024-12-14 16:42:05.791703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for cid:1 through cid:63 (READ sqid:1, lba 16512 through 24448 in steps of 128, len:128; every completion ABORTED - SQ DELETION (00/08)) ...]
00:28:35.887 [2024-12-14 16:42:05.792663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x31d5600 is same with the state(6) to be set
00:28:35.887 [2024-12-14 16:42:05.793647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.887 [2024-12-14 16:42:05.793662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for cid:1 through cid:47 (READ sqid:1, lba 16512 through 22400 in steps of 128, len:128; every completion ABORTED - SQ DELETION (00/08)); dump continues past end of this excerpt ...]
00:28:35.888 [2024-12-14 16:42:05.794369] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.794375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.794383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.794390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.794398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.794405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.794412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.794419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.794428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.794434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.794442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.794449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.794457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.794463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.794471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.794478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.794486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.798422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.798435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.798443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.798452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.798458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.798466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 
[2024-12-14 16:42:05.798473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.798481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.798488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.798495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.798502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.798510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.798516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.798524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.888 [2024-12-14 16:42:05.798531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.888 [2024-12-14 16:42:05.798538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3422f50 is same with the state(6) to be set 00:28:35.888 [2024-12-14 16:42:05.799502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:35.888 [2024-12-14 16:42:05.799522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:35.888 [2024-12-14 16:42:05.799532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:35.888 [2024-12-14 16:42:05.799543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:35.888 [2024-12-14 16:42:05.799626] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:35.888 [2024-12-14 16:42:05.799643] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:35.888 [2024-12-14 16:42:05.799654] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:35.888 [2024-12-14 16:42:05.799728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:35.888 [2024-12-14 16:42:05.799739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:35.888 task offset: 16384 on job bdev=Nvme10n1 fails 00:28:35.888 00:28:35.888 Latency(us) 00:28:35.888 [2024-12-14T15:42:05.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.889 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme1n1 ended in about 0.71 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme1n1 : 0.71 181.53 11.35 90.77 0.00 232008.98 15042.07 214708.42 00:28:35.889 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme2n1 ended in about 0.71 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme2n1 : 0.71 181.00 11.31 90.50 0.00 227461.61 21096.35 236678.58 00:28:35.889 Job: 
Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme3n1 ended in about 0.71 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme3n1 : 0.71 180.49 11.28 90.24 0.00 222939.75 23592.96 217704.35 00:28:35.889 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme4n1 ended in about 0.71 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme4n1 : 0.71 185.61 11.60 89.99 0.00 213951.16 14230.67 235679.94 00:28:35.889 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme5n1 ended in about 0.69 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme5n1 : 0.69 184.35 11.52 92.17 0.00 207646.07 10173.68 232684.01 00:28:35.889 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme6n1 ended in about 0.71 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme6n1 : 0.71 186.48 11.66 89.74 0.00 203461.00 37449.14 202724.69 00:28:35.889 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme7n1 ended in about 0.72 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme7n1 : 0.72 178.97 11.19 89.49 0.00 204330.26 26089.57 226692.14 00:28:35.889 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme8n1 ended in about 0.72 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme8n1 : 0.72 177.52 11.09 88.76 0.00 201112.30 14043.43 198730.12 00:28:35.889 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme9n1 : 0.70 183.90 11.49 0.00 0.00 281490.29 17850.76 263641.97 00:28:35.889 Job: Nvme10n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.889 Job: Nvme10n1 ended in about 0.69 seconds with error 00:28:35.889 Verification LBA range: start 0x0 length 0x400 00:28:35.889 Nvme10n1 : 0.69 184.70 11.54 92.35 0.00 181447.92 10173.68 245666.38 00:28:35.889 [2024-12-14T15:42:05.975Z] =================================================================================================================== 00:28:35.889 [2024-12-14T15:42:05.975Z] Total : 1824.55 114.03 814.01 0.00 215346.28 10173.68 263641.97 00:28:35.889 [2024-12-14 16:42:05.831020] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:35.889 [2024-12-14 16:42:05.831072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:35.889 [2024-12-14 16:42:05.831416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.831435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d4140 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.831446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4140 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.831640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.831651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d3cd0 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.831659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3cd0 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.831783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.831794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c9630 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.831801] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c9630 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.831950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.831960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d0ca0 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.831967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d0ca0 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.833530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:35.889 [2024-12-14 16:42:05.833546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:35.889 [2024-12-14 16:42:05.833734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.833749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ff370 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.833757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ff370 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.833898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.833909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdf610 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.833916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdf610 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.834014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.834026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x251a530 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.834034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251a530 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.834049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d4140 (9): Bad file descriptor 00:28:35.889 [2024-12-14 16:42:05.834062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d3cd0 (9): Bad file descriptor 00:28:35.889 [2024-12-14 16:42:05.834072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c9630 (9): Bad file descriptor 00:28:35.889 [2024-12-14 16:42:05.834081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d0ca0 (9): Bad file descriptor 00:28:35.889 [2024-12-14 16:42:05.834111] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:28:35.889 [2024-12-14 16:42:05.834133] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:35.889 [2024-12-14 16:42:05.834144] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:35.889 [2024-12-14 16:42:05.834156] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:35.889 [2024-12-14 16:42:05.834166] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:28:35.889 [2024-12-14 16:42:05.834243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:35.889 [2024-12-14 16:42:05.834422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.834435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f7de0 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.834443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7de0 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.834507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.889 [2024-12-14 16:42:05.834518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25340b0 with addr=10.0.0.2, port=4420 00:28:35.889 [2024-12-14 16:42:05.834525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25340b0 is same with the state(6) to be set 00:28:35.889 [2024-12-14 16:42:05.834535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ff370 (9): Bad file descriptor 00:28:35.889 [2024-12-14 16:42:05.834544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdf610 (9): Bad file descriptor 00:28:35.889 [2024-12-14 16:42:05.834553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251a530 (9): Bad file descriptor 00:28:35.889 [2024-12-14 16:42:05.834568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:35.889 [2024-12-14 16:42:05.834575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:35.889 [2024-12-14 16:42:05.834583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:28:35.889 [2024-12-14 16:42:05.834592] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:35.889 [2024-12-14 16:42:05.834601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:35.889 [2024-12-14 16:42:05.834607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:35.889 [2024-12-14 16:42:05.834614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:35.889 [2024-12-14 16:42:05.834620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:35.889 [2024-12-14 16:42:05.834627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:35.890 [2024-12-14 16:42:05.834632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:35.890 [2024-12-14 16:42:05.834639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:35.890 [2024-12-14 16:42:05.834645] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:35.890 [2024-12-14 16:42:05.834652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:35.890 [2024-12-14 16:42:05.834658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:35.890 [2024-12-14 16:42:05.834668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:28:35.890 [2024-12-14 16:42:05.834674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:35.890 [2024-12-14 16:42:05.834889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.890 [2024-12-14 16:42:05.834901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2533e90 with addr=10.0.0.2, port=4420 00:28:35.890 [2024-12-14 16:42:05.834908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2533e90 is same with the state(6) to be set 00:28:35.890 [2024-12-14 16:42:05.834917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f7de0 (9): Bad file descriptor 00:28:35.890 [2024-12-14 16:42:05.834926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25340b0 (9): Bad file descriptor 00:28:35.890 [2024-12-14 16:42:05.834934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:35.890 [2024-12-14 16:42:05.834940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:35.890 [2024-12-14 16:42:05.834946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:35.890 [2024-12-14 16:42:05.834953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:28:35.890 [2024-12-14 16:42:05.834959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:35.890 [2024-12-14 16:42:05.834965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:35.890 [2024-12-14 16:42:05.834972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:35.890 [2024-12-14 16:42:05.834995] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:35.890 [2024-12-14 16:42:05.835003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:35.890 [2024-12-14 16:42:05.835009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:35.890 [2024-12-14 16:42:05.835016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:35.890 [2024-12-14 16:42:05.835023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:35.890 [2024-12-14 16:42:05.835051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2533e90 (9): Bad file descriptor 00:28:35.890 [2024-12-14 16:42:05.835060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:35.890 [2024-12-14 16:42:05.835066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:35.890 [2024-12-14 16:42:05.835073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:28:35.890 [2024-12-14 16:42:05.835080] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:35.890 [2024-12-14 16:42:05.835088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:35.890 [2024-12-14 16:42:05.835094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:35.890 [2024-12-14 16:42:05.835101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:35.890 [2024-12-14 16:42:05.835107] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:35.890 [2024-12-14 16:42:05.835134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:35.890 [2024-12-14 16:42:05.835144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:35.890 [2024-12-14 16:42:05.835152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:35.890 [2024-12-14 16:42:05.835158] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:28:36.149 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1095863 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1095863 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1095863 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.086 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:37.087 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.087 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.087 rmmod nvme_tcp 00:28:37.346 rmmod nvme_fabrics 00:28:37.346 rmmod nvme_keyring 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:37.346 16:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1095747 ']' 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1095747 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1095747 ']' 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1095747 00:28:37.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1095747) - No such process 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1095747 is not found' 00:28:37.346 Process with pid 1095747 is not found 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.346 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.251 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.251 00:28:39.251 real 0m7.066s 00:28:39.251 user 0m16.193s 00:28:39.251 sys 0m1.221s 00:28:39.251 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.251 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:39.251 ************************************ 00:28:39.251 END TEST nvmf_shutdown_tc3 00:28:39.251 ************************************ 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.511 ************************************ 00:28:39.511 START TEST nvmf_shutdown_tc4 00:28:39.511 ************************************ 00:28:39.511 16:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.511 16:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.511 16:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:39.511 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.511 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:39.512 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.512 16:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:28:39.512 Found net devices under 0000:af:00.0: cvl_0_0 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:39.512 Found net devices under 0000:af:00.1: cvl_0_1 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.512 16:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.512 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:39.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:28:39.772 00:28:39.772 --- 10.0.0.2 ping statistics --- 00:28:39.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.772 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:39.772 00:28:39.772 --- 10.0.0.1 ping statistics --- 00:28:39.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.772 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:39.772 16:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1097467 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1097467 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1097467 ']' 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.772 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:39.772 [2024-12-14 16:42:09.773446] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:39.772 [2024-12-14 16:42:09.773490] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.772 [2024-12-14 16:42:09.832838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.772 [2024-12-14 16:42:09.855303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.772 [2024-12-14 16:42:09.855341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.772 [2024-12-14 16:42:09.855348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.772 [2024-12-14 16:42:09.855354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.772 [2024-12-14 16:42:09.855359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:40.031 [2024-12-14 16:42:09.856910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.031 [2024-12-14 16:42:09.857018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.031 [2024-12-14 16:42:09.857149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.031 [2024-12-14 16:42:09.857151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.031 [2024-12-14 16:42:09.988107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.031 16:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.031 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.031 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.031 Malloc1 00:28:40.031 [2024-12-14 16:42:10.103319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.289 Malloc2 00:28:40.289 Malloc3 00:28:40.289 Malloc4 00:28:40.289 Malloc5 00:28:40.289 Malloc6 00:28:40.289 Malloc7 00:28:40.548 Malloc8 00:28:40.548 Malloc9 
00:28:40.548 Malloc10 00:28:40.548 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.548 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:40.548 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.548 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.548 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1097541 00:28:40.548 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:40.548 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:40.548 [2024-12-14 16:42:10.604463] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1097467
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1097467 ']'
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1097467
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1097467
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1097467'
00:28:45.829 killing process with pid 1097467
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1097467
00:28:45.829 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1097467
00:28:45.829 [2024-12-14 16:42:15.598046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c2f0 is same with the state(6) to be set
[identical recv-state errors repeated many times each for tqpair=0x140c2f0, 0x140c7e0, 0x140b930, 0x140fab0, 0x140cf00, 0x140d3d0, 0x140d8c0, 0x140cb80, and 0x15fe480; repeats omitted]
00:28:45.829 Write completed with error (sct=0, sc=8)
00:28:45.829 starting I/O failed: -6
[the two messages above repeated many times around each qpair error below; repeats omitted]
00:28:45.829 [2024-12-14 16:42:15.603323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.829 NVMe io qpair process completion error
00:28:45.830 [2024-12-14 16:42:15.610063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.830 [2024-12-14 16:42:15.611025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.831 [2024-12-14 16:42:15.613614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.831 NVMe io qpair process completion error
00:28:45.831 [2024-12-14 16:42:15.614535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.832 [2024-12-14 16:42:15.615413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 [2024-12-14 16:42:15.616403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error 
(sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with 
error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.832 Write completed with error (sct=0, sc=8) 00:28:45.832 starting I/O failed: -6 00:28:45.833 [2024-12-14 16:42:15.618143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.833 NVMe io qpair process completion error 00:28:45.833 Write completed with error (sct=0, 
sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write 
completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 [2024-12-14 16:42:15.619138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 
00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write 
completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 [2024-12-14 16:42:15.620071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with 
error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.833 starting I/O failed: -6 00:28:45.833 Write completed with error (sct=0, sc=8) 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 
starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 [2024-12-14 16:42:15.621035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error 
(sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with 
error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 [2024-12-14 16:42:15.622656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.834 NVMe io qpair process completion error 00:28:45.834 Write completed with error (sct=0, 
sc=8) 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 starting I/O failed: -6 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.834 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write 
completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 [2024-12-14 16:42:15.623653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 Write completed with error (sct=0, sc=8) 00:28:45.835 starting I/O failed: -6 
00:28:45.835 Write completed with error (sct=0, sc=8)
00:28:45.835 starting I/O failed: -6
[... the two lines above repeat for each outstanding I/O from 00:28:45.835 through 00:28:45.839 ...]
00:28:45.835 [2024-12-14 16:42:15.624543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.836 [2024-12-14 16:42:15.625524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.836 [2024-12-14 16:42:15.627673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.836 NVMe io qpair process completion error
00:28:45.837 [2024-12-14 16:42:15.630326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.838 [2024-12-14 16:42:15.634858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.838 NVMe io qpair process completion error
00:28:45.838 [2024-12-14 16:42:15.636076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.838 [2024-12-14 16:42:15.636928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.838 [2024-12-14 16:42:15.637935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.839 Write completed with error (sct=0, sc=8)
00:28:45.839 starting I/O failed: -6
[... repeats continue ...]
-6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 [2024-12-14 16:42:15.640116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.839 NVMe io qpair process completion error 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write 
completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 [2024-12-14 16:42:15.641016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write 
completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 starting I/O failed: -6 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.839 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O 
failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 [2024-12-14 16:42:15.641909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O 
failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write 
completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 [2024-12-14 16:42:15.642957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 
Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 
00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.840 Write completed with error (sct=0, sc=8) 00:28:45.840 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: 
-6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 [2024-12-14 16:42:15.644521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:45.841 NVMe io qpair process completion error 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed 
with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed 
with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed 
with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O 
failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 [2024-12-14 16:42:15.647975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:45.841 Write completed with error (sct=0, sc=8) 00:28:45.841 starting I/O failed: -6 00:28:45.841 
00:28:45.841 Write completed with error (sct=0, sc=8)
00:28:45.841 starting I/O failed: -6
00:28:45.841 [... many identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:28:45.842 [2024-12-14 16:42:15.648840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.842 [... write-error / I/O-failed lines elided ...]
00:28:45.842 [2024-12-14 16:42:15.649838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.843 [... write-error / I/O-failed lines elided ...]
00:28:45.843 [2024-12-14 16:42:15.656394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.843 NVMe io qpair process completion error
00:28:45.843 [... write-error / I/O-failed lines elided ...]
00:28:45.843 [2024-12-14 16:42:15.657437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.843 [... write-error / I/O-failed lines elided ...]
00:28:45.843 [2024-12-14 16:42:15.658222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:45.844 [... write-error / I/O-failed lines elided ...]
00:28:45.844 [2024-12-14 16:42:15.659216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:45.844 [... write-error / I/O-failed lines elided ...]
00:28:45.844 [2024-12-14 16:42:15.661652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.844 NVMe io qpair process completion error
00:28:45.844 Initializing NVMe Controllers
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:45.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:45.844 Controller IO queue size 128, less than required.
00:28:45.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
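The repeated "Controller IO queue size 128, less than required" warnings above mean the initiator requested a deeper IO queue than the target advertises, so excess requests get queued inside the NVMe driver. A minimal sketch of re-running the perf initiator with a queue depth at or below the advertised size; the flag values, IO size, run time, and workload are illustrative assumptions (only the address and subsystem NQN are taken from the log), not this run's actual command line:

```shell
# Hypothetical re-run of the perf initiator with a lower queue depth (-q) and a
# small IO size (-o) so requests are not queued at the NVMe driver.
# Address/subsystem values come from the log above; everything else is assumed.
QD=64        # at or below the advertised IO queue size of 128
IO_SIZE=4096 # 4 KiB writes
PERF_CMD="spdk_nvme_perf -q $QD -o $IO_SIZE -w write -t 10 \
 -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3'"
echo "$PERF_CMD"
```

Lowering `-q` trades some queue-depth-driven throughput for bounded in-driver queueing; the same effect can be had by shrinking `-o`.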
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:45.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:45.844 Initialization complete. Launching workers.
00:28:45.844 ========================================================
00:28:45.845 Latency(us)
00:28:45.845 Device Information                                                     :     IOPS    MiB/s   Average       min       max
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  2225.15    95.61  57528.75    946.86  97921.74
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  2182.36    93.77  58669.81    890.17 111800.12
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  2186.81    93.96  58562.80    789.97 109591.00
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  2219.85    95.38  57741.31    880.62 106762.28
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  2188.08    94.02  58596.13    806.34 109880.08
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2214.98    95.17  57847.33    469.77 104987.02
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  2216.25    95.23  57850.03    894.47 111734.23
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  2219.64    95.38  57830.64    743.22 118997.87
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  2186.59    93.96  58077.55    528.37 102632.47
00:28:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  2194.22    94.28  57883.62    684.78 102243.87
00:28:45.845 ========================================================
00:28:45.845 Total                                                                  : 22033.93   946.77  58056.36    469.77 118997.87
00:28:45.845
00:28:45.845 [2024-12-14 16:42:15.664635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a0370 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a0880 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5b30 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a0190 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1ff0 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199ffb0 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a2320 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a2650 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a0550 is same with the state(6) to be set
00:28:45.845 [2024-12-14 16:42:15.664896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1cc0 is same with the state(6) to be set
00:28:45.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:46.104 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1097541
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1097541
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1097541
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.042 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.042 rmmod nvme_tcp 00:28:47.042 rmmod nvme_fabrics 00:28:47.042 rmmod nvme_keyring 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1097467 ']' 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1097467 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1097467 ']' 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1097467 00:28:47.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1097467) - No such process 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1097467 is not found' 00:28:47.042 Process with pid 1097467 is not found 
00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.042 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.626 00:28:49.626 real 0m9.748s 00:28:49.626 user 0m24.723s 00:28:49.626 sys 0m5.265s 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.626 16:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.626 ************************************ 00:28:49.626 END TEST nvmf_shutdown_tc4 00:28:49.626 ************************************ 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:49.626 00:28:49.626 real 0m39.506s 00:28:49.626 user 1m34.446s 00:28:49.626 sys 0m13.850s 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:49.626 ************************************ 00:28:49.626 END TEST nvmf_shutdown 00:28:49.626 ************************************ 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:49.626 ************************************ 00:28:49.626 START TEST nvmf_nsid 00:28:49.626 ************************************ 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:49.626 * Looking for test storage... 
00:28:49.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:49.626 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.627 
16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:49.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.627 --rc genhtml_branch_coverage=1 00:28:49.627 --rc genhtml_function_coverage=1 00:28:49.627 --rc genhtml_legend=1 00:28:49.627 --rc geninfo_all_blocks=1 00:28:49.627 --rc 
geninfo_unexecuted_blocks=1 00:28:49.627 00:28:49.627 ' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:49.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.627 --rc genhtml_branch_coverage=1 00:28:49.627 --rc genhtml_function_coverage=1 00:28:49.627 --rc genhtml_legend=1 00:28:49.627 --rc geninfo_all_blocks=1 00:28:49.627 --rc geninfo_unexecuted_blocks=1 00:28:49.627 00:28:49.627 ' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:49.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.627 --rc genhtml_branch_coverage=1 00:28:49.627 --rc genhtml_function_coverage=1 00:28:49.627 --rc genhtml_legend=1 00:28:49.627 --rc geninfo_all_blocks=1 00:28:49.627 --rc geninfo_unexecuted_blocks=1 00:28:49.627 00:28:49.627 ' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:49.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.627 --rc genhtml_branch_coverage=1 00:28:49.627 --rc genhtml_function_coverage=1 00:28:49.627 --rc genhtml_legend=1 00:28:49.627 --rc geninfo_all_blocks=1 00:28:49.627 --rc geninfo_unexecuted_blocks=1 00:28:49.627 00:28:49.627 ' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.627 16:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:49.627 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.628 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:28:49.628 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.628 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:49.628 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:49.628 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.628 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:56.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:56.198 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:56.198 Found net devices under 0000:af:00.0: cvl_0_0 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:56.198 Found net devices under 0000:af:00.1: cvl_0_1 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.198 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.199 16:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.199 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:28:56.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:28:56.199 00:28:56.199 --- 10.0.0.2 ping statistics --- 00:28:56.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.199 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:28:56.199 00:28:56.199 --- 10.0.0.1 ping statistics --- 00:28:56.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.199 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.199 16:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1102018 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1102018 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1102018 ']' 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 [2024-12-14 16:42:25.411224] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:56.199 [2024-12-14 16:42:25.411267] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.199 [2024-12-14 16:42:25.487551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.199 [2024-12-14 16:42:25.509127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.199 [2024-12-14 16:42:25.509164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.199 [2024-12-14 16:42:25.509171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.199 [2024-12-14 16:42:25.509177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.199 [2024-12-14 16:42:25.509182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:56.199 [2024-12-14 16:42:25.509686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1102146 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.199 
16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=34e1d948-46a7-424f-96bd-6c11302f2a73 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ef5550bc-4088-4307-b18b-a268bf310de4 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e34b7d62-f331-4fa6-9e67-36348eb68438 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 null0 00:28:56.199 null1 00:28:56.199 [2024-12-14 16:42:25.683667] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:56.199 [2024-12-14 16:42:25.683713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102146 ] 00:28:56.199 null2 00:28:56.199 [2024-12-14 16:42:25.692357] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.199 [2024-12-14 16:42:25.716548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1102146 /var/tmp/tgt2.sock 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1102146 ']' 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:56.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 [2024-12-14 16:42:25.759754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.199 [2024-12-14 16:42:25.782017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:56.199 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:56.199 [2024-12-14 16:42:26.280479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.458 [2024-12-14 16:42:26.296569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:56.458 nvme0n1 nvme0n2 00:28:56.458 nvme1n1 00:28:56.458 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:56.458 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:56.458 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:57.393 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:57.394 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:57.394 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 34e1d948-46a7-424f-96bd-6c11302f2a73 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:58.406 16:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=34e1d94846a7424f96bd6c11302f2a73 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 34E1D94846A7424F96BD6C11302F2A73 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 34E1D94846A7424F96BD6C11302F2A73 == \3\4\E\1\D\9\4\8\4\6\A\7\4\2\4\F\9\6\B\D\6\C\1\1\3\0\2\F\2\A\7\3 ]] 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:58.406 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:58.664 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:58.664 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:58.664 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:58.664 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ef5550bc-4088-4307-b18b-a268bf310de4 00:28:58.664 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:58.664 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:58.664 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:58.664 
16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ef5550bc40884307b18ba268bf310de4 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EF5550BC40884307B18BA268BF310DE4 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ EF5550BC40884307B18BA268BF310DE4 == \E\F\5\5\5\0\B\C\4\0\8\8\4\3\0\7\B\1\8\B\A\2\6\8\B\F\3\1\0\D\E\4 ]] 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e34b7d62-f331-4fa6-9e67-36348eb68438 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e34b7d62f3314fa69e6736348eb68438 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E34B7D62F3314FA69E6736348EB68438 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E34B7D62F3314FA69E6736348EB68438 == \E\3\4\B\7\D\6\2\F\3\3\1\4\F\A\6\9\E\6\7\3\6\3\4\8\E\B\6\8\4\3\8 ]] 00:28:58.665 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1102146 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1102146 ']' 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1102146 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102146 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102146' 00:28:58.923 killing process with pid 1102146 00:28:58.923 16:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1102146 00:28:58.923 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1102146 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.182 rmmod nvme_tcp 00:28:59.182 rmmod nvme_fabrics 00:28:59.182 rmmod nvme_keyring 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1102018 ']' 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1102018 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1102018 ']' 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1102018 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:59.182 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.182 16:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102018 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102018' 00:28:59.442 killing process with pid 1102018 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1102018 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1102018 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.442 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.442 16:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.978 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.978 00:29:01.978 real 0m12.258s 00:29:01.978 user 0m9.462s 00:29:01.978 sys 0m5.488s 00:29:01.978 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.978 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:01.978 ************************************ 00:29:01.978 END TEST nvmf_nsid 00:29:01.978 ************************************ 00:29:01.978 16:42:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:01.978 00:29:01.978 real 18m32.037s 00:29:01.978 user 49m5.438s 00:29:01.978 sys 4m43.238s 00:29:01.978 16:42:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.978 16:42:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:01.978 ************************************ 00:29:01.978 END TEST nvmf_target_extra 00:29:01.978 ************************************ 00:29:01.978 16:42:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:01.978 16:42:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:01.978 16:42:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.978 16:42:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.978 ************************************ 00:29:01.978 START TEST nvmf_host 00:29:01.978 ************************************ 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:01.978 * Looking for test storage... 
00:29:01.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:01.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.978 --rc genhtml_branch_coverage=1 00:29:01.978 --rc genhtml_function_coverage=1 00:29:01.978 --rc genhtml_legend=1 00:29:01.978 --rc geninfo_all_blocks=1 00:29:01.978 --rc geninfo_unexecuted_blocks=1 00:29:01.978 00:29:01.978 ' 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:01.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.978 --rc genhtml_branch_coverage=1 00:29:01.978 --rc genhtml_function_coverage=1 00:29:01.978 --rc genhtml_legend=1 00:29:01.978 --rc 
geninfo_all_blocks=1 00:29:01.978 --rc geninfo_unexecuted_blocks=1 00:29:01.978 00:29:01.978 ' 00:29:01.978 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:01.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.978 --rc genhtml_branch_coverage=1 00:29:01.978 --rc genhtml_function_coverage=1 00:29:01.978 --rc genhtml_legend=1 00:29:01.978 --rc geninfo_all_blocks=1 00:29:01.979 --rc geninfo_unexecuted_blocks=1 00:29:01.979 00:29:01.979 ' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:01.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.979 --rc genhtml_branch_coverage=1 00:29:01.979 --rc genhtml_function_coverage=1 00:29:01.979 --rc genhtml_legend=1 00:29:01.979 --rc geninfo_all_blocks=1 00:29:01.979 --rc geninfo_unexecuted_blocks=1 00:29:01.979 00:29:01.979 ' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:01.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.979 ************************************ 00:29:01.979 START TEST nvmf_multicontroller 00:29:01.979 ************************************ 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:01.979 * Looking for test storage... 
00:29:01.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.979 16:42:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:01.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.979 --rc genhtml_branch_coverage=1 00:29:01.979 --rc genhtml_function_coverage=1 
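The trace above walks through `cmp_versions 1.15 '<' 2`: both versions are split on `.-:` into arrays and compared component by component. A minimal re-creation of that comparison logic (the function name `ver_lt` is ours, not SPDK's; like the traced helper, it assumes purely numeric components):

```shell
# Component-wise "less than" on dotted version strings, mirroring the
# cmp_versions flow traced in the log. Numeric components only.
ver_lt() {
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0 (so "1.15" vs "2" works).
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not "less than"
}
```

With this, `ver_lt 1.15 2` succeeds and `ver_lt 2 1.15` fails, matching the `lt 1.15 2` result the lcov check in the trace relies on.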
00:29:01.979 --rc genhtml_legend=1 00:29:01.979 --rc geninfo_all_blocks=1 00:29:01.979 --rc geninfo_unexecuted_blocks=1 00:29:01.979 00:29:01.979 ' 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:01.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.979 --rc genhtml_branch_coverage=1 00:29:01.979 --rc genhtml_function_coverage=1 00:29:01.979 --rc genhtml_legend=1 00:29:01.979 --rc geninfo_all_blocks=1 00:29:01.979 --rc geninfo_unexecuted_blocks=1 00:29:01.979 00:29:01.979 ' 00:29:01.979 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:01.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.980 --rc genhtml_branch_coverage=1 00:29:01.980 --rc genhtml_function_coverage=1 00:29:01.980 --rc genhtml_legend=1 00:29:01.980 --rc geninfo_all_blocks=1 00:29:01.980 --rc geninfo_unexecuted_blocks=1 00:29:01.980 00:29:01.980 ' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:01.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.980 --rc genhtml_branch_coverage=1 00:29:01.980 --rc genhtml_function_coverage=1 00:29:01.980 --rc genhtml_legend=1 00:29:01.980 --rc geninfo_all_blocks=1 00:29:01.980 --rc geninfo_unexecuted_blocks=1 00:29:01.980 00:29:01.980 ' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.980 16:42:32 
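Each time the trace re-sources `paths/export.sh`, the same toolchain directories are prepended again, which is why the `PATH` lines above grow with every repetition. A hypothetical helper (not part of SPDK) that collapses such a list back to unique entries, keeping first occurrences:

```shell
# Hypothetical helper: drop repeated entries from a PATH-like string,
# keeping the first occurrence of each directory.
dedup_path() {
    local out= seen=: dir
    local IFS=:
    for dir in $1; do
        case "$seen" in
            *":$dir:"*) ;;                       # already kept, skip
            *) out=${out:+$out:}$dir; seen=$seen$dir: ;;
        esac
    done
    printf '%s\n' "$out"
}
```

For example, `dedup_path /a:/b:/a:/c:/b` prints `/a:/b:/c`.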
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:01.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.980 16:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:08.551 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:08.552 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:08.552 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.552 16:42:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:08.552 Found net devices under 0000:af:00.0: cvl_0_0 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:08.552 Found net devices under 0000:af:00.1: cvl_0_1 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
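The device-discovery lines above glob `/sys/bus/pci/devices/$pci/net/*` and strip the directory prefix to recover interface names such as `cvl_0_0`. A sketch of that lookup, with the sysfs root parameterized purely so the logic can be exercised against a fake tree (the function name is an assumption; the real scripts inline this):

```shell
# List the kernel network interfaces bound to a PCI function by globbing
# its net/ subdirectory, as the trace does. root is normally
# /sys/bus/pci/devices; it is a parameter here only for testability.
net_devs_for_pci() {
    local root=$1 pci=$2
    local -a devs=("$root/$pci/net/"*)
    # An unmatched glob expands to itself; treat that as "no devices".
    [ -e "${devs[0]}" ] || return 1
    printf '%s\n' "${devs[@]##*/}"
}
```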
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:08.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:29:08.552 00:29:08.552 --- 10.0.0.2 ping statistics --- 00:29:08.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.552 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:29:08.552 00:29:08.552 --- 10.0.0.1 ping statistics --- 00:29:08.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.552 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:08.552 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1106181 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1106181 00:29:08.553 16:42:37 
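The `nvmf_tcp_init` sequence traced above moves the target interface into its own network namespace, addresses both ends, brings the links up, opens port 4420, and ping-tests both directions. A dry-run sketch of that plumbing (interface and address values mirror the log; echoing through `run` keeps it inspectable without root, and dropping the echo would actually apply it):

```shell
# Dry-run of the netns testbed setup from the trace. Drop the echo in
# run() to execute for real (requires root).
run() { echo "$*"; }

setup_tcp_testbed() {
    local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5 port=$6
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"
    run ip addr add "$ini_ip/24" dev "$ini_if"
    run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport "$port" -j ACCEPT
}
```

Calling `setup_tcp_testbed cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1 4420` prints the same command shape the log records before its two ping checks.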
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1106181 ']' 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.553 16:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 [2024-12-14 16:42:38.010848] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:08.553 [2024-12-14 16:42:38.010890] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.553 [2024-12-14 16:42:38.087438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:08.553 [2024-12-14 16:42:38.109857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.553 [2024-12-14 16:42:38.109892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:08.553 [2024-12-14 16:42:38.109899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.553 [2024-12-14 16:42:38.109905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.553 [2024-12-14 16:42:38.109910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.553 [2024-12-14 16:42:38.111186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.553 [2024-12-14 16:42:38.111292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.553 [2024-12-14 16:42:38.111294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 [2024-12-14 16:42:38.241803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 Malloc0 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 [2024-12-14 
16:42:38.304169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 [2024-12-14 16:42:38.316113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 Malloc1 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1106381 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1106381 /var/tmp/bdevperf.sock 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1106381 ']' 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:08.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.553 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.554 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.554 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:08.554 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:08.554 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.554 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.813 NVMe0n1 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 
00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.813 1 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:08.813 16:42:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.813 request: 00:29:08.813 { 00:29:08.813 "name": "NVMe0", 00:29:08.813 "trtype": "tcp", 00:29:08.813 "traddr": "10.0.0.2", 00:29:08.813 "adrfam": "ipv4", 00:29:08.813 "trsvcid": "4420", 00:29:08.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.813 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:08.813 "hostaddr": "10.0.0.1", 00:29:08.813 "prchk_reftag": false, 00:29:08.813 "prchk_guard": false, 00:29:08.813 "hdgst": false, 00:29:08.813 "ddgst": false, 00:29:08.813 "allow_unrecognized_csi": false, 00:29:08.813 "method": "bdev_nvme_attach_controller", 00:29:08.813 "req_id": 1 00:29:08.813 } 00:29:08.813 Got JSON-RPC error response 00:29:08.813 response: 00:29:08.813 { 00:29:08.813 "code": -114, 00:29:08.813 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:08.813 } 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.813 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:08.814 16:42:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.814 request: 00:29:08.814 { 00:29:08.814 "name": "NVMe0", 00:29:08.814 "trtype": "tcp", 00:29:08.814 "traddr": "10.0.0.2", 00:29:08.814 "adrfam": "ipv4", 00:29:08.814 "trsvcid": "4420", 00:29:08.814 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:08.814 "hostaddr": "10.0.0.1", 00:29:08.814 "prchk_reftag": false, 00:29:08.814 "prchk_guard": false, 00:29:08.814 "hdgst": false, 00:29:08.814 "ddgst": false, 00:29:08.814 "allow_unrecognized_csi": false, 00:29:08.814 "method": "bdev_nvme_attach_controller", 00:29:08.814 "req_id": 1 00:29:08.814 } 00:29:08.814 Got JSON-RPC error response 00:29:08.814 response: 00:29:08.814 { 00:29:08.814 "code": -114, 00:29:08.814 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:08.814 } 00:29:08.814 16:42:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.814 request: 00:29:08.814 { 00:29:08.814 "name": "NVMe0", 00:29:08.814 "trtype": "tcp", 00:29:08.814 "traddr": "10.0.0.2", 00:29:08.814 "adrfam": "ipv4", 00:29:08.814 "trsvcid": "4420", 00:29:08.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.814 "hostaddr": "10.0.0.1", 00:29:08.814 "prchk_reftag": false, 00:29:08.814 "prchk_guard": false, 00:29:08.814 "hdgst": false, 00:29:08.814 "ddgst": false, 00:29:08.814 "multipath": "disable", 00:29:08.814 "allow_unrecognized_csi": false, 00:29:08.814 "method": "bdev_nvme_attach_controller", 00:29:08.814 "req_id": 1 00:29:08.814 } 00:29:08.814 Got JSON-RPC error response 00:29:08.814 response: 00:29:08.814 { 00:29:08.814 "code": -114, 00:29:08.814 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:08.814 } 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.814 request: 00:29:08.814 { 00:29:08.814 "name": "NVMe0", 00:29:08.814 "trtype": "tcp", 00:29:08.814 "traddr": "10.0.0.2", 00:29:08.814 "adrfam": "ipv4", 00:29:08.814 "trsvcid": "4420", 00:29:08.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.814 "hostaddr": "10.0.0.1", 00:29:08.814 "prchk_reftag": false, 00:29:08.814 "prchk_guard": false, 00:29:08.814 "hdgst": false, 00:29:08.814 "ddgst": false, 00:29:08.814 "multipath": "failover", 00:29:08.814 "allow_unrecognized_csi": false, 00:29:08.814 "method": "bdev_nvme_attach_controller", 00:29:08.814 "req_id": 1 00:29:08.814 } 00:29:08.814 Got JSON-RPC error response 00:29:08.814 response: 00:29:08.814 { 00:29:08.814 "code": -114, 00:29:08.814 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:08.814 } 00:29:08.814 16:42:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.814 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.073 NVMe0n1 00:29:09.073 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.073 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:09.074 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.074 16:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.074 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:09.074 16:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:10.451 { 00:29:10.451 "results": [ 00:29:10.451 { 00:29:10.451 "job": "NVMe0n1", 00:29:10.451 "core_mask": "0x1", 00:29:10.451 "workload": "write", 00:29:10.451 "status": "finished", 00:29:10.451 "queue_depth": 128, 00:29:10.451 "io_size": 4096, 00:29:10.451 "runtime": 1.004109, 00:29:10.451 "iops": 25242.2794736428, 00:29:10.451 "mibps": 98.60265419391719, 00:29:10.451 "io_failed": 0, 00:29:10.451 "io_timeout": 0, 00:29:10.451 "avg_latency_us": 5065.275391176592, 00:29:10.451 "min_latency_us": 1451.1542857142856, 00:29:10.451 "max_latency_us": 10111.26857142857 00:29:10.451 } 00:29:10.451 ], 00:29:10.451 "core_count": 1 00:29:10.451 } 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1106381 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1106381 ']' 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1106381 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1106381 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1106381' 00:29:10.451 killing process with pid 1106381 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1106381 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1106381 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:10.451 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:10.451 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:10.451 [2024-12-14 16:42:38.419408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:10.451 [2024-12-14 16:42:38.419460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1106381 ] 00:29:10.451 [2024-12-14 16:42:38.491744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.451 [2024-12-14 16:42:38.514676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.451 [2024-12-14 16:42:39.121800] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name df882fc2-70e4-4f8d-9321-60c77549428c already exists 00:29:10.451 [2024-12-14 16:42:39.121827] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:df882fc2-70e4-4f8d-9321-60c77549428c alias for bdev NVMe1n1 00:29:10.451 [2024-12-14 16:42:39.121835] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:10.451 Running I/O for 1 seconds... 00:29:10.451 25218.00 IOPS, 98.51 MiB/s 00:29:10.451 Latency(us) 00:29:10.451 [2024-12-14T15:42:40.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.451 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:10.451 NVMe0n1 : 1.00 25242.28 98.60 0.00 0.00 5065.28 1451.15 10111.27 00:29:10.451 [2024-12-14T15:42:40.537Z] =================================================================================================================== 00:29:10.451 [2024-12-14T15:42:40.537Z] Total : 25242.28 98.60 0.00 0.00 5065.28 1451.15 10111.27 00:29:10.451 Received shutdown signal, test time was about 1.000000 seconds 00:29:10.451 00:29:10.451 Latency(us) 00:29:10.451 [2024-12-14T15:42:40.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.451 [2024-12-14T15:42:40.537Z] =================================================================================================================== 00:29:10.451 [2024-12-14T15:42:40.538Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:10.452 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:10.452 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.711 rmmod nvme_tcp 00:29:10.711 rmmod nvme_fabrics 00:29:10.711 rmmod nvme_keyring 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1106181 ']' 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1106181 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1106181 ']' 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1106181 
00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1106181 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1106181' 00:29:10.711 killing process with pid 1106181 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1106181 00:29:10.711 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1106181 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.971 16:42:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.874 16:42:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:12.875 00:29:12.875 real 0m11.087s 00:29:12.875 user 0m12.252s 00:29:12.875 sys 0m5.056s 00:29:12.875 16:42:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:12.875 16:42:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.875 ************************************ 00:29:12.875 END TEST nvmf_multicontroller 00:29:12.875 ************************************ 00:29:13.134 16:42:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:13.134 16:42:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:13.134 16:42:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:13.134 16:42:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.134 ************************************ 00:29:13.134 START TEST nvmf_aer 00:29:13.134 ************************************ 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:13.134 * Looking for test storage... 
00:29:13.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:13.134 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:13.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.135 --rc genhtml_branch_coverage=1 00:29:13.135 --rc genhtml_function_coverage=1 00:29:13.135 --rc genhtml_legend=1 00:29:13.135 --rc geninfo_all_blocks=1 00:29:13.135 --rc geninfo_unexecuted_blocks=1 00:29:13.135 00:29:13.135 ' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:13.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.135 --rc 
genhtml_branch_coverage=1 00:29:13.135 --rc genhtml_function_coverage=1 00:29:13.135 --rc genhtml_legend=1 00:29:13.135 --rc geninfo_all_blocks=1 00:29:13.135 --rc geninfo_unexecuted_blocks=1 00:29:13.135 00:29:13.135 ' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:13.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.135 --rc genhtml_branch_coverage=1 00:29:13.135 --rc genhtml_function_coverage=1 00:29:13.135 --rc genhtml_legend=1 00:29:13.135 --rc geninfo_all_blocks=1 00:29:13.135 --rc geninfo_unexecuted_blocks=1 00:29:13.135 00:29:13.135 ' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:13.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.135 --rc genhtml_branch_coverage=1 00:29:13.135 --rc genhtml_function_coverage=1 00:29:13.135 --rc genhtml_legend=1 00:29:13.135 --rc geninfo_all_blocks=1 00:29:13.135 --rc geninfo_unexecuted_blocks=1 00:29:13.135 00:29:13.135 ' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.135 16:42:43 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:13.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.135 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:13.395 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:13.395 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.395 16:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:19.966 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:19.966 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.966 16:42:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:19.966 Found net devices under 0000:af:00.0: cvl_0_0 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:19.966 Found net devices under 0000:af:00.1: cvl_0_1 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.966 16:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.966 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.966 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.966 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.966 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:19.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:29:19.966 00:29:19.966 --- 10.0.0.2 ping statistics --- 00:29:19.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.966 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:29:19.966 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:19.966 00:29:19.966 --- 10.0.0.1 ping statistics --- 00:29:19.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.966 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:19.966 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.966 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:19.966 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1110116 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1110116 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1110116 ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 [2024-12-14 16:42:49.142536] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:19.967 [2024-12-14 16:42:49.142584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.967 [2024-12-14 16:42:49.223919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.967 [2024-12-14 16:42:49.247309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:19.967 [2024-12-14 16:42:49.247351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.967 [2024-12-14 16:42:49.247357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.967 [2024-12-14 16:42:49.247363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.967 [2024-12-14 16:42:49.247368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.967 [2024-12-14 16:42:49.248791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.967 [2024-12-14 16:42:49.248899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.967 [2024-12-14 16:42:49.248915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.967 [2024-12-14 16:42:49.248925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 [2024-12-14 16:42:49.389814] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 Malloc0 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 [2024-12-14 16:42:49.451404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
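Stripped of the xtrace noise, the target setup replayed above is a short sequence of SPDK RPCs. Addresses, NQNs, names, and sizes below are taken from the log; the `rpc.py` path is an assumption (it lives under `scripts/` in an SPDK checkout), and the commands must run against the live target started earlier:

```shell
RPC=scripts/rpc.py   # path within an SPDK checkout (assumption)

$RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport init
$RPC bdev_malloc_create 64 512 --name Malloc0                 # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 2                            # allow any host, max 2 namespaces
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
```

The `nvmf_get_subsystems` dump that follows in the log reflects exactly this state: one NVMe subsystem with `max_namespaces: 2`, a single Malloc0 namespace, and a TCP listener on 10.0.0.2:4420.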
00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 [ 00:29:19.967 { 00:29:19.967 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:19.967 "subtype": "Discovery", 00:29:19.967 "listen_addresses": [], 00:29:19.967 "allow_any_host": true, 00:29:19.967 "hosts": [] 00:29:19.967 }, 00:29:19.967 { 00:29:19.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:19.967 "subtype": "NVMe", 00:29:19.967 "listen_addresses": [ 00:29:19.967 { 00:29:19.967 "trtype": "TCP", 00:29:19.967 "adrfam": "IPv4", 00:29:19.967 "traddr": "10.0.0.2", 00:29:19.967 "trsvcid": "4420" 00:29:19.967 } 00:29:19.967 ], 00:29:19.967 "allow_any_host": true, 00:29:19.967 "hosts": [], 00:29:19.967 "serial_number": "SPDK00000000000001", 00:29:19.967 "model_number": "SPDK bdev Controller", 00:29:19.967 "max_namespaces": 2, 00:29:19.967 "min_cntlid": 1, 00:29:19.967 "max_cntlid": 65519, 00:29:19.967 "namespaces": [ 00:29:19.967 { 00:29:19.967 "nsid": 1, 00:29:19.967 "bdev_name": "Malloc0", 00:29:19.967 "name": "Malloc0", 00:29:19.967 "nguid": "2787178FC58E4ED49C7095D325744AF4", 00:29:19.967 "uuid": "2787178f-c58e-4ed4-9c70-95d325744af4" 00:29:19.967 } 00:29:19.967 ] 00:29:19.967 } 00:29:19.967 ] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1110167 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 Malloc1 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.967 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.967 Asynchronous Event Request test 00:29:19.967 Attaching to 10.0.0.2 00:29:19.967 Attached to 10.0.0.2 00:29:19.967 Registering asynchronous event callbacks... 00:29:19.967 Starting namespace attribute notice tests for all controllers... 00:29:19.967 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:19.967 aer_cb - Changed Namespace 00:29:19.967 Cleaning up... 
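The harness synchronizes with the `aer` test binary through a touch file, polling in `waitforfile` as traced above (`i` incremented every 100 ms up to 200 tries). A minimal sketch of that loop — the retry count is made a parameter here for illustration; the traced helper hard-codes 200:

```shell
# Simplified sketch of the waitforfile polling loop from the trace.
waitforfile() {
    local path=$1 retries=${2:-200} i=0
    # poll every 100 ms until the file appears or we run out of tries (~20 s)
    while [ ! -e "$path" ] && [ "$i" -lt "$retries" ]; do
        sleep 0.1
        i=$((i + 1))
    done
    [ -e "$path" ]   # exit status 0 only if the file showed up
}
```

In the run above the file appears after three polls, at which point the second `bdev_malloc_create`/`nvmf_subsystem_add_ns` pair triggers the namespace-attribute-changed AEN that the test is waiting for.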
00:29:19.967 [ 00:29:19.967 { 00:29:19.967 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:19.967 "subtype": "Discovery", 00:29:19.967 "listen_addresses": [], 00:29:19.968 "allow_any_host": true, 00:29:19.968 "hosts": [] 00:29:19.968 }, 00:29:19.968 { 00:29:19.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:19.968 "subtype": "NVMe", 00:29:19.968 "listen_addresses": [ 00:29:19.968 { 00:29:19.968 "trtype": "TCP", 00:29:19.968 "adrfam": "IPv4", 00:29:19.968 "traddr": "10.0.0.2", 00:29:19.968 "trsvcid": "4420" 00:29:19.968 } 00:29:19.968 ], 00:29:19.968 "allow_any_host": true, 00:29:19.968 "hosts": [], 00:29:19.968 "serial_number": "SPDK00000000000001", 00:29:19.968 "model_number": "SPDK bdev Controller", 00:29:19.968 "max_namespaces": 2, 00:29:19.968 "min_cntlid": 1, 00:29:19.968 "max_cntlid": 65519, 00:29:19.968 "namespaces": [ 00:29:19.968 { 00:29:19.968 "nsid": 1, 00:29:19.968 "bdev_name": "Malloc0", 00:29:19.968 "name": "Malloc0", 00:29:19.968 "nguid": "2787178FC58E4ED49C7095D325744AF4", 00:29:19.968 "uuid": "2787178f-c58e-4ed4-9c70-95d325744af4" 00:29:19.968 }, 00:29:19.968 { 00:29:19.968 "nsid": 2, 00:29:19.968 "bdev_name": "Malloc1", 00:29:19.968 "name": "Malloc1", 00:29:19.968 "nguid": "DE7B55241A2F411FBC0A2DEDE95E59E2", 00:29:19.968 "uuid": "de7b5524-1a2f-411f-bc0a-2dede95e59e2" 00:29:19.968 } 00:29:19.968 ] 00:29:19.968 } 00:29:19.968 ] 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1110167 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.968 16:42:49 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.968 rmmod nvme_tcp 00:29:19.968 rmmod nvme_fabrics 00:29:19.968 rmmod nvme_keyring 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1110116 ']' 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1110116 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1110116 ']' 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1110116 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.968 16:42:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1110116 00:29:19.968 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.968 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.968 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1110116' 00:29:19.968 killing process with pid 1110116 00:29:19.968 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1110116 00:29:19.968 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1110116 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.227 16:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:22.764 00:29:22.764 real 0m9.260s 00:29:22.764 user 0m5.411s 00:29:22.764 sys 0m4.916s 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:22.764 ************************************ 00:29:22.764 END TEST nvmf_aer 00:29:22.764 ************************************ 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.764 ************************************ 00:29:22.764 START TEST nvmf_async_init 00:29:22.764 ************************************ 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:22.764 * Looking for test storage... 
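The `nvmf_async_init` run that starts here first gates its lcov options on a dotted-version comparison (`lt 1.15 2` via `cmp_versions` in `scripts/common.sh`, traced below). A minimal "less than" check in the same spirit, under the assumption of purely numeric fields (the traced helper additionally validates each field against `^[0-9]+$`):

```shell
# Minimal dotted-version "less than" comparison, illustrative of cmp_versions.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)    # split both versions on dots
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}    # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                         # equal: not less-than
}
```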
00:29:22.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.764 16:42:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.764 --rc genhtml_branch_coverage=1 00:29:22.764 --rc genhtml_function_coverage=1 00:29:22.764 --rc genhtml_legend=1 00:29:22.764 --rc geninfo_all_blocks=1 00:29:22.764 --rc geninfo_unexecuted_blocks=1 00:29:22.764 
00:29:22.764 ' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.764 --rc genhtml_branch_coverage=1 00:29:22.764 --rc genhtml_function_coverage=1 00:29:22.764 --rc genhtml_legend=1 00:29:22.764 --rc geninfo_all_blocks=1 00:29:22.764 --rc geninfo_unexecuted_blocks=1 00:29:22.764 00:29:22.764 ' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.764 --rc genhtml_branch_coverage=1 00:29:22.764 --rc genhtml_function_coverage=1 00:29:22.764 --rc genhtml_legend=1 00:29:22.764 --rc geninfo_all_blocks=1 00:29:22.764 --rc geninfo_unexecuted_blocks=1 00:29:22.764 00:29:22.764 ' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.764 --rc genhtml_branch_coverage=1 00:29:22.764 --rc genhtml_function_coverage=1 00:29:22.764 --rc genhtml_legend=1 00:29:22.764 --rc geninfo_all_blocks=1 00:29:22.764 --rc geninfo_unexecuted_blocks=1 00:29:22.764 00:29:22.764 ' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.764 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:22.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=66dc85ac37094c66a6fe99034d6ce269 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:22.765 16:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.038 16:42:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:28.038 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.038 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:28.298 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:28.298 Found net devices under 0000:af:00.0: cvl_0_0 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:28.298 Found net devices under 0000:af:00.1: cvl_0_1 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.298 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:28.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:28.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:29:28.299 00:29:28.299 --- 10.0.0.2 ping statistics --- 00:29:28.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.299 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:29:28.299 00:29:28.299 --- 10.0.0.1 ping statistics --- 00:29:28.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.299 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:28.299 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1113831 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1113831 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1113831 ']' 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.558 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.558 [2024-12-14 16:42:58.474295] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:28.558 [2024-12-14 16:42:58.474344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.558 [2024-12-14 16:42:58.552506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.558 [2024-12-14 16:42:58.574475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.558 [2024-12-14 16:42:58.574513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.558 [2024-12-14 16:42:58.574520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.558 [2024-12-14 16:42:58.574530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.558 [2024-12-14 16:42:58.574535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.558 [2024-12-14 16:42:58.575023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.817 [2024-12-14 16:42:58.702364] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.817 null0 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 66dc85ac37094c66a6fe99034d6ce269 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.817 [2024-12-14 16:42:58.746603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.817 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.076 nvme0n1 00:29:29.076 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.076 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:29.076 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.076 16:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.076 [ 00:29:29.076 { 00:29:29.076 "name": "nvme0n1", 00:29:29.076 "aliases": [ 00:29:29.076 "66dc85ac-3709-4c66-a6fe-99034d6ce269" 00:29:29.076 ], 00:29:29.076 "product_name": "NVMe disk", 00:29:29.076 "block_size": 512, 00:29:29.076 "num_blocks": 2097152, 00:29:29.076 "uuid": "66dc85ac-3709-4c66-a6fe-99034d6ce269", 00:29:29.076 "numa_id": 1, 00:29:29.076 "assigned_rate_limits": { 00:29:29.076 "rw_ios_per_sec": 0, 00:29:29.076 "rw_mbytes_per_sec": 0, 00:29:29.076 "r_mbytes_per_sec": 0, 00:29:29.076 "w_mbytes_per_sec": 0 00:29:29.076 }, 00:29:29.076 "claimed": false, 00:29:29.076 "zoned": false, 00:29:29.076 "supported_io_types": { 00:29:29.076 "read": true, 00:29:29.076 "write": true, 00:29:29.076 "unmap": false, 00:29:29.076 "flush": true, 00:29:29.076 "reset": true, 00:29:29.076 "nvme_admin": true, 00:29:29.076 "nvme_io": true, 00:29:29.076 "nvme_io_md": false, 00:29:29.076 "write_zeroes": true, 00:29:29.076 "zcopy": false, 00:29:29.076 "get_zone_info": false, 00:29:29.076 "zone_management": false, 00:29:29.076 "zone_append": false, 00:29:29.076 "compare": true, 00:29:29.076 "compare_and_write": true, 00:29:29.076 "abort": true, 00:29:29.076 "seek_hole": false, 00:29:29.076 "seek_data": false, 00:29:29.076 "copy": true, 00:29:29.076 
"nvme_iov_md": false 00:29:29.076 }, 00:29:29.076 "memory_domains": [ 00:29:29.076 { 00:29:29.076 "dma_device_id": "system", 00:29:29.076 "dma_device_type": 1 00:29:29.076 } 00:29:29.076 ], 00:29:29.076 "driver_specific": { 00:29:29.076 "nvme": [ 00:29:29.076 { 00:29:29.076 "trid": { 00:29:29.076 "trtype": "TCP", 00:29:29.076 "adrfam": "IPv4", 00:29:29.076 "traddr": "10.0.0.2", 00:29:29.076 "trsvcid": "4420", 00:29:29.076 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:29.076 }, 00:29:29.076 "ctrlr_data": { 00:29:29.076 "cntlid": 1, 00:29:29.076 "vendor_id": "0x8086", 00:29:29.076 "model_number": "SPDK bdev Controller", 00:29:29.076 "serial_number": "00000000000000000000", 00:29:29.076 "firmware_revision": "25.01", 00:29:29.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.076 "oacs": { 00:29:29.076 "security": 0, 00:29:29.076 "format": 0, 00:29:29.076 "firmware": 0, 00:29:29.076 "ns_manage": 0 00:29:29.076 }, 00:29:29.076 "multi_ctrlr": true, 00:29:29.076 "ana_reporting": false 00:29:29.076 }, 00:29:29.076 "vs": { 00:29:29.076 "nvme_version": "1.3" 00:29:29.076 }, 00:29:29.076 "ns_data": { 00:29:29.076 "id": 1, 00:29:29.076 "can_share": true 00:29:29.076 } 00:29:29.076 } 00:29:29.076 ], 00:29:29.077 "mp_policy": "active_passive" 00:29:29.077 } 00:29:29.077 } 00:29:29.077 ] 00:29:29.077 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.077 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:29.077 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.077 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.077 [2024-12-14 16:42:59.012083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:29.077 [2024-12-14 16:42:59.012138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x2434a90 (9): Bad file descriptor 00:29:29.077 [2024-12-14 16:42:59.143635] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:29.077 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.077 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:29.077 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.077 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.077 [ 00:29:29.077 { 00:29:29.077 "name": "nvme0n1", 00:29:29.077 "aliases": [ 00:29:29.077 "66dc85ac-3709-4c66-a6fe-99034d6ce269" 00:29:29.077 ], 00:29:29.077 "product_name": "NVMe disk", 00:29:29.077 "block_size": 512, 00:29:29.077 "num_blocks": 2097152, 00:29:29.077 "uuid": "66dc85ac-3709-4c66-a6fe-99034d6ce269", 00:29:29.077 "numa_id": 1, 00:29:29.077 "assigned_rate_limits": { 00:29:29.077 "rw_ios_per_sec": 0, 00:29:29.077 "rw_mbytes_per_sec": 0, 00:29:29.077 "r_mbytes_per_sec": 0, 00:29:29.077 "w_mbytes_per_sec": 0 00:29:29.077 }, 00:29:29.077 "claimed": false, 00:29:29.077 "zoned": false, 00:29:29.077 "supported_io_types": { 00:29:29.077 "read": true, 00:29:29.077 "write": true, 00:29:29.077 "unmap": false, 00:29:29.077 "flush": true, 00:29:29.077 "reset": true, 00:29:29.077 "nvme_admin": true, 00:29:29.077 "nvme_io": true, 00:29:29.077 "nvme_io_md": false, 00:29:29.077 "write_zeroes": true, 00:29:29.077 "zcopy": false, 00:29:29.077 "get_zone_info": false, 00:29:29.077 "zone_management": false, 00:29:29.077 "zone_append": false, 00:29:29.077 "compare": true, 00:29:29.077 "compare_and_write": true, 00:29:29.077 "abort": true, 00:29:29.077 "seek_hole": false, 00:29:29.077 "seek_data": false, 00:29:29.077 "copy": true, 00:29:29.077 "nvme_iov_md": false 00:29:29.077 }, 00:29:29.077 "memory_domains": [ 
00:29:29.077 { 00:29:29.077 "dma_device_id": "system", 00:29:29.077 "dma_device_type": 1 00:29:29.077 } 00:29:29.077 ], 00:29:29.077 "driver_specific": { 00:29:29.077 "nvme": [ 00:29:29.077 { 00:29:29.077 "trid": { 00:29:29.077 "trtype": "TCP", 00:29:29.077 "adrfam": "IPv4", 00:29:29.077 "traddr": "10.0.0.2", 00:29:29.077 "trsvcid": "4420", 00:29:29.077 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:29.077 }, 00:29:29.077 "ctrlr_data": { 00:29:29.077 "cntlid": 2, 00:29:29.077 "vendor_id": "0x8086", 00:29:29.077 "model_number": "SPDK bdev Controller", 00:29:29.077 "serial_number": "00000000000000000000", 00:29:29.336 "firmware_revision": "25.01", 00:29:29.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.336 "oacs": { 00:29:29.336 "security": 0, 00:29:29.336 "format": 0, 00:29:29.336 "firmware": 0, 00:29:29.336 "ns_manage": 0 00:29:29.336 }, 00:29:29.336 "multi_ctrlr": true, 00:29:29.336 "ana_reporting": false 00:29:29.336 }, 00:29:29.336 "vs": { 00:29:29.336 "nvme_version": "1.3" 00:29:29.336 }, 00:29:29.336 "ns_data": { 00:29:29.336 "id": 1, 00:29:29.336 "can_share": true 00:29:29.336 } 00:29:29.336 } 00:29:29.336 ], 00:29:29.336 "mp_policy": "active_passive" 00:29:29.336 } 00:29:29.336 } 00:29:29.336 ] 00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.BdBHgrxaTQ 
00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:29.336 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.BdBHgrxaTQ 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.BdBHgrxaTQ 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.337 [2024-12-14 16:42:59.216702] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:29.337 [2024-12-14 16:42:59.216790] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.337 [2024-12-14 16:42:59.236770] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:29.337 nvme0n1 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.337 [ 00:29:29.337 { 00:29:29.337 "name": "nvme0n1", 00:29:29.337 "aliases": [ 00:29:29.337 "66dc85ac-3709-4c66-a6fe-99034d6ce269" 00:29:29.337 ], 00:29:29.337 "product_name": "NVMe disk", 00:29:29.337 "block_size": 512, 00:29:29.337 "num_blocks": 2097152, 00:29:29.337 "uuid": "66dc85ac-3709-4c66-a6fe-99034d6ce269", 00:29:29.337 "numa_id": 1, 00:29:29.337 "assigned_rate_limits": { 00:29:29.337 "rw_ios_per_sec": 0, 00:29:29.337 
"rw_mbytes_per_sec": 0, 00:29:29.337 "r_mbytes_per_sec": 0, 00:29:29.337 "w_mbytes_per_sec": 0 00:29:29.337 }, 00:29:29.337 "claimed": false, 00:29:29.337 "zoned": false, 00:29:29.337 "supported_io_types": { 00:29:29.337 "read": true, 00:29:29.337 "write": true, 00:29:29.337 "unmap": false, 00:29:29.337 "flush": true, 00:29:29.337 "reset": true, 00:29:29.337 "nvme_admin": true, 00:29:29.337 "nvme_io": true, 00:29:29.337 "nvme_io_md": false, 00:29:29.337 "write_zeroes": true, 00:29:29.337 "zcopy": false, 00:29:29.337 "get_zone_info": false, 00:29:29.337 "zone_management": false, 00:29:29.337 "zone_append": false, 00:29:29.337 "compare": true, 00:29:29.337 "compare_and_write": true, 00:29:29.337 "abort": true, 00:29:29.337 "seek_hole": false, 00:29:29.337 "seek_data": false, 00:29:29.337 "copy": true, 00:29:29.337 "nvme_iov_md": false 00:29:29.337 }, 00:29:29.337 "memory_domains": [ 00:29:29.337 { 00:29:29.337 "dma_device_id": "system", 00:29:29.337 "dma_device_type": 1 00:29:29.337 } 00:29:29.337 ], 00:29:29.337 "driver_specific": { 00:29:29.337 "nvme": [ 00:29:29.337 { 00:29:29.337 "trid": { 00:29:29.337 "trtype": "TCP", 00:29:29.337 "adrfam": "IPv4", 00:29:29.337 "traddr": "10.0.0.2", 00:29:29.337 "trsvcid": "4421", 00:29:29.337 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:29.337 }, 00:29:29.337 "ctrlr_data": { 00:29:29.337 "cntlid": 3, 00:29:29.337 "vendor_id": "0x8086", 00:29:29.337 "model_number": "SPDK bdev Controller", 00:29:29.337 "serial_number": "00000000000000000000", 00:29:29.337 "firmware_revision": "25.01", 00:29:29.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.337 "oacs": { 00:29:29.337 "security": 0, 00:29:29.337 "format": 0, 00:29:29.337 "firmware": 0, 00:29:29.337 "ns_manage": 0 00:29:29.337 }, 00:29:29.337 "multi_ctrlr": true, 00:29:29.337 "ana_reporting": false 00:29:29.337 }, 00:29:29.337 "vs": { 00:29:29.337 "nvme_version": "1.3" 00:29:29.337 }, 00:29:29.337 "ns_data": { 00:29:29.337 "id": 1, 00:29:29.337 "can_share": true 00:29:29.337 } 
00:29:29.337 } 00:29:29.337 ], 00:29:29.337 "mp_policy": "active_passive" 00:29:29.337 } 00:29:29.337 } 00:29:29.337 ] 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.BdBHgrxaTQ 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.337 rmmod nvme_tcp 00:29:29.337 rmmod nvme_fabrics 00:29:29.337 rmmod nvme_keyring 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:29.337 16:42:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1113831 ']' 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1113831 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1113831 ']' 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1113831 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.337 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1113831 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1113831' 00:29:29.597 killing process with pid 1113831 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1113831 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1113831 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:29.597 
16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.597 16:42:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.134 00:29:32.134 real 0m9.353s 00:29:32.134 user 0m3.012s 00:29:32.134 sys 0m4.755s 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.134 ************************************ 00:29:32.134 END TEST nvmf_async_init 00:29:32.134 ************************************ 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.134 ************************************ 00:29:32.134 START TEST dma 00:29:32.134 ************************************ 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:29:32.134 * Looking for test storage... 00:29:32.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:32.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.134 --rc genhtml_branch_coverage=1 00:29:32.134 --rc genhtml_function_coverage=1 00:29:32.134 --rc genhtml_legend=1 00:29:32.134 --rc geninfo_all_blocks=1 00:29:32.134 --rc geninfo_unexecuted_blocks=1 00:29:32.134 00:29:32.134 ' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:32.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.134 --rc genhtml_branch_coverage=1 00:29:32.134 --rc genhtml_function_coverage=1 
00:29:32.134 --rc genhtml_legend=1 00:29:32.134 --rc geninfo_all_blocks=1 00:29:32.134 --rc geninfo_unexecuted_blocks=1 00:29:32.134 00:29:32.134 ' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:32.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.134 --rc genhtml_branch_coverage=1 00:29:32.134 --rc genhtml_function_coverage=1 00:29:32.134 --rc genhtml_legend=1 00:29:32.134 --rc geninfo_all_blocks=1 00:29:32.134 --rc geninfo_unexecuted_blocks=1 00:29:32.134 00:29:32.134 ' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:32.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.134 --rc genhtml_branch_coverage=1 00:29:32.134 --rc genhtml_function_coverage=1 00:29:32.134 --rc genhtml_legend=1 00:29:32.134 --rc geninfo_all_blocks=1 00:29:32.134 --rc geninfo_unexecuted_blocks=1 00:29:32.134 00:29:32.134 ' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.134 16:43:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:32.135 
16:43:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:32.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:32.135 00:29:32.135 real 0m0.206s 00:29:32.135 user 0m0.121s 00:29:32.135 sys 0m0.098s 00:29:32.135 16:43:01 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:32.135 ************************************ 00:29:32.135 END TEST dma 00:29:32.135 ************************************ 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.135 16:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.135 ************************************ 00:29:32.135 START TEST nvmf_identify 00:29:32.135 ************************************ 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:32.135 * Looking for test storage... 
00:29:32.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:32.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.135 --rc genhtml_branch_coverage=1 00:29:32.135 --rc genhtml_function_coverage=1 00:29:32.135 --rc genhtml_legend=1 00:29:32.135 --rc geninfo_all_blocks=1 00:29:32.135 --rc geninfo_unexecuted_blocks=1 00:29:32.135 00:29:32.135 ' 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:29:32.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.135 --rc genhtml_branch_coverage=1 00:29:32.135 --rc genhtml_function_coverage=1 00:29:32.135 --rc genhtml_legend=1 00:29:32.135 --rc geninfo_all_blocks=1 00:29:32.135 --rc geninfo_unexecuted_blocks=1 00:29:32.135 00:29:32.135 ' 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:32.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.135 --rc genhtml_branch_coverage=1 00:29:32.135 --rc genhtml_function_coverage=1 00:29:32.135 --rc genhtml_legend=1 00:29:32.135 --rc geninfo_all_blocks=1 00:29:32.135 --rc geninfo_unexecuted_blocks=1 00:29:32.135 00:29:32.135 ' 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:32.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.135 --rc genhtml_branch_coverage=1 00:29:32.135 --rc genhtml_function_coverage=1 00:29:32.135 --rc genhtml_legend=1 00:29:32.135 --rc geninfo_all_blocks=1 00:29:32.135 --rc geninfo_unexecuted_blocks=1 00:29:32.135 00:29:32.135 ' 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.135 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:32.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.394 16:43:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.970 16:43:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:38.970 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.970 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.970 
16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:38.970 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:38.971 Found net devices under 0000:af:00.0: cvl_0_0 00:29:38.971 16:43:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:38.971 Found net devices under 0000:af:00.1: cvl_0_1 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.971 16:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:38.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:38.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:29:38.971 00:29:38.971 --- 10.0.0.2 ping statistics --- 00:29:38.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.971 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:38.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:29:38.971 00:29:38.971 --- 10.0.0.1 ping statistics --- 00:29:38.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.971 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1117390 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1117390 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1117390 ']' 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.971 [2024-12-14 16:43:08.123935] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:38.971 [2024-12-14 16:43:08.123977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.971 [2024-12-14 16:43:08.203580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.971 [2024-12-14 16:43:08.227665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.971 [2024-12-14 16:43:08.227703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.971 [2024-12-14 16:43:08.227711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.971 [2024-12-14 16:43:08.227717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.971 [2024-12-14 16:43:08.227722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:38.971 [2024-12-14 16:43:08.229040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.971 [2024-12-14 16:43:08.229148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.971 [2024-12-14 16:43:08.229236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.971 [2024-12-14 16:43:08.229237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.971 [2024-12-14 16:43:08.321563] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.971 Malloc0 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.971 16:43:08 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.971 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.972 [2024-12-14 16:43:08.423270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.972 16:43:08 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.972 [ 00:29:38.972 { 00:29:38.972 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:38.972 "subtype": "Discovery", 00:29:38.972 "listen_addresses": [ 00:29:38.972 { 00:29:38.972 "trtype": "TCP", 00:29:38.972 "adrfam": "IPv4", 00:29:38.972 "traddr": "10.0.0.2", 00:29:38.972 "trsvcid": "4420" 00:29:38.972 } 00:29:38.972 ], 00:29:38.972 "allow_any_host": true, 00:29:38.972 "hosts": [] 00:29:38.972 }, 00:29:38.972 { 00:29:38.972 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.972 "subtype": "NVMe", 00:29:38.972 "listen_addresses": [ 00:29:38.972 { 00:29:38.972 "trtype": "TCP", 00:29:38.972 "adrfam": "IPv4", 00:29:38.972 "traddr": "10.0.0.2", 00:29:38.972 "trsvcid": "4420" 00:29:38.972 } 00:29:38.972 ], 00:29:38.972 "allow_any_host": true, 00:29:38.972 "hosts": [], 00:29:38.972 "serial_number": "SPDK00000000000001", 00:29:38.972 "model_number": "SPDK bdev Controller", 00:29:38.972 "max_namespaces": 32, 00:29:38.972 "min_cntlid": 1, 00:29:38.972 "max_cntlid": 65519, 00:29:38.972 "namespaces": [ 00:29:38.972 { 00:29:38.972 "nsid": 1, 00:29:38.972 "bdev_name": "Malloc0", 00:29:38.972 "name": "Malloc0", 00:29:38.972 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:38.972 "eui64": "ABCDEF0123456789", 00:29:38.972 "uuid": "cc97af6e-83c0-440a-a148-b3860ea293c3" 00:29:38.972 } 00:29:38.972 ] 00:29:38.972 } 00:29:38.972 ] 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.972 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:38.972 [2024-12-14 16:43:08.478129] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:38.972 [2024-12-14 16:43:08.478162] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117608 ] 00:29:38.972 [2024-12-14 16:43:08.519780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:38.972 [2024-12-14 16:43:08.519827] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:38.972 [2024-12-14 16:43:08.519832] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:38.972 [2024-12-14 16:43:08.519843] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:38.972 [2024-12-14 16:43:08.519852] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:38.972 [2024-12-14 16:43:08.520378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:38.972 [2024-12-14 16:43:08.520410] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x95ded0 0 00:29:38.972 [2024-12-14 16:43:08.530571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:38.972 [2024-12-14 16:43:08.530586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:38.972 [2024-12-14 16:43:08.530591] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:38.972 [2024-12-14 16:43:08.530594] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:38.972 [2024-12-14 16:43:08.530625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.530631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.530635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.972 [2024-12-14 16:43:08.530649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:38.972 [2024-12-14 16:43:08.530666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.972 [2024-12-14 16:43:08.538566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.972 [2024-12-14 16:43:08.538575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.972 [2024-12-14 16:43:08.538579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.972 [2024-12-14 16:43:08.538595] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:38.972 [2024-12-14 16:43:08.538601] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:38.972 [2024-12-14 16:43:08.538607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:38.972 [2024-12-14 16:43:08.538619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 
00:29:38.972 [2024-12-14 16:43:08.538634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.972 [2024-12-14 16:43:08.538648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.972 [2024-12-14 16:43:08.538805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.972 [2024-12-14 16:43:08.538810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.972 [2024-12-14 16:43:08.538813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.972 [2024-12-14 16:43:08.538822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:38.972 [2024-12-14 16:43:08.538828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:38.972 [2024-12-14 16:43:08.538835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.972 [2024-12-14 16:43:08.538850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.972 [2024-12-14 16:43:08.538860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.972 [2024-12-14 16:43:08.538952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.972 [2024-12-14 16:43:08.538958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:38.972 [2024-12-14 16:43:08.538961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.972 [2024-12-14 16:43:08.538969] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:38.972 [2024-12-14 16:43:08.538976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:38.972 [2024-12-14 16:43:08.538981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.538988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.972 [2024-12-14 16:43:08.538993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.972 [2024-12-14 16:43:08.539003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.972 [2024-12-14 16:43:08.539101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.972 [2024-12-14 16:43:08.539107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.972 [2024-12-14 16:43:08.539110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.539113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.972 [2024-12-14 16:43:08.539117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:38.972 [2024-12-14 16:43:08.539126] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.539130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.972 [2024-12-14 16:43:08.539133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.539139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.973 [2024-12-14 16:43:08.539147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.973 [2024-12-14 16:43:08.539254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.973 [2024-12-14 16:43:08.539260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.973 [2024-12-14 16:43:08.539263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.973 [2024-12-14 16:43:08.539270] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:38.973 [2024-12-14 16:43:08.539275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:38.973 [2024-12-14 16:43:08.539281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:38.973 [2024-12-14 16:43:08.539388] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:38.973 [2024-12-14 16:43:08.539393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:38.973 [2024-12-14 16:43:08.539402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.539414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.973 [2024-12-14 16:43:08.539424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.973 [2024-12-14 16:43:08.539485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.973 [2024-12-14 16:43:08.539490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.973 [2024-12-14 16:43:08.539493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.973 [2024-12-14 16:43:08.539501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:38.973 [2024-12-14 16:43:08.539509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.539521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.973 [2024-12-14 16:43:08.539530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.973 [2024-12-14 
16:43:08.539639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.973 [2024-12-14 16:43:08.539645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.973 [2024-12-14 16:43:08.539648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.973 [2024-12-14 16:43:08.539656] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:38.973 [2024-12-14 16:43:08.539660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:38.973 [2024-12-14 16:43:08.539667] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:38.973 [2024-12-14 16:43:08.539674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:38.973 [2024-12-14 16:43:08.539681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.539690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.973 [2024-12-14 16:43:08.539700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.973 [2024-12-14 16:43:08.539794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.973 [2024-12-14 16:43:08.539800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:29:38.973 [2024-12-14 16:43:08.539804] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539807] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95ded0): datao=0, datal=4096, cccid=0 00:29:38.973 [2024-12-14 16:43:08.539811] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9540) on tqpair(0x95ded0): expected_datao=0, payload_size=4096 00:29:38.973 [2024-12-14 16:43:08.539817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539824] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539828] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.973 [2024-12-14 16:43:08.539846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.973 [2024-12-14 16:43:08.539849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.973 [2024-12-14 16:43:08.539859] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:38.973 [2024-12-14 16:43:08.539864] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:38.973 [2024-12-14 16:43:08.539867] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:38.973 [2024-12-14 16:43:08.539872] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:38.973 [2024-12-14 16:43:08.539876] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:29:38.973 [2024-12-14 16:43:08.539880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:38.973 [2024-12-14 16:43:08.539892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:38.973 [2024-12-14 16:43:08.539900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.539907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.539912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:38.973 [2024-12-14 16:43:08.539922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.973 [2024-12-14 16:43:08.539993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.973 [2024-12-14 16:43:08.539998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.973 [2024-12-14 16:43:08.540001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.973 [2024-12-14 16:43:08.540011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.540022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.973 [2024-12-14 16:43:08.540028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.540039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.973 [2024-12-14 16:43:08.540044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.540057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.973 [2024-12-14 16:43:08.540063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.540074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.973 [2024-12-14 16:43:08.540078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:38.973 [2024-12-14 16:43:08.540088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:29:38.973 [2024-12-14 16:43:08.540093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95ded0) 00:29:38.973 [2024-12-14 16:43:08.540102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.973 [2024-12-14 16:43:08.540113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9540, cid 0, qid 0 00:29:38.973 [2024-12-14 16:43:08.540117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c96c0, cid 1, qid 0 00:29:38.973 [2024-12-14 16:43:08.540121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9840, cid 2, qid 0 00:29:38.973 [2024-12-14 16:43:08.540125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c99c0, cid 3, qid 0 00:29:38.973 [2024-12-14 16:43:08.540129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9b40, cid 4, qid 0 00:29:38.973 [2024-12-14 16:43:08.540244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.973 [2024-12-14 16:43:08.540250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.973 [2024-12-14 16:43:08.540253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9b40) on tqpair=0x95ded0 00:29:38.973 [2024-12-14 16:43:08.540261] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:38.973 [2024-12-14 16:43:08.540265] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:38.973 [2024-12-14 16:43:08.540274] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.973 [2024-12-14 16:43:08.540278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95ded0) 00:29:38.974 [2024-12-14 16:43:08.540283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.974 [2024-12-14 16:43:08.540292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9b40, cid 4, qid 0 00:29:38.974 [2024-12-14 16:43:08.540357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.974 [2024-12-14 16:43:08.540363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.974 [2024-12-14 16:43:08.540366] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540369] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95ded0): datao=0, datal=4096, cccid=4 00:29:38.974 [2024-12-14 16:43:08.540373] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9b40) on tqpair(0x95ded0): expected_datao=0, payload_size=4096 00:29:38.974 [2024-12-14 16:43:08.540377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540396] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540400] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.974 [2024-12-14 16:43:08.540451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.974 [2024-12-14 16:43:08.540454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9b40) on tqpair=0x95ded0 00:29:38.974 [2024-12-14 16:43:08.540468] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:38.974 [2024-12-14 16:43:08.540492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95ded0) 00:29:38.974 [2024-12-14 16:43:08.540501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.974 [2024-12-14 16:43:08.540507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95ded0) 00:29:38.974 [2024-12-14 16:43:08.540518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.974 [2024-12-14 16:43:08.540531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9b40, cid 4, qid 0 00:29:38.974 [2024-12-14 16:43:08.540536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9cc0, cid 5, qid 0 00:29:38.974 [2024-12-14 16:43:08.540645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.974 [2024-12-14 16:43:08.540651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.974 [2024-12-14 16:43:08.540654] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540658] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95ded0): datao=0, datal=1024, cccid=4 00:29:38.974 [2024-12-14 16:43:08.540661] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9b40) on tqpair(0x95ded0): expected_datao=0, 
payload_size=1024 00:29:38.974 [2024-12-14 16:43:08.540665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540671] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540674] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.974 [2024-12-14 16:43:08.540684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.974 [2024-12-14 16:43:08.540686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.540690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9cc0) on tqpair=0x95ded0 00:29:38.974 [2024-12-14 16:43:08.581731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.974 [2024-12-14 16:43:08.581744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.974 [2024-12-14 16:43:08.581748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.581751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9b40) on tqpair=0x95ded0 00:29:38.974 [2024-12-14 16:43:08.581764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.581768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95ded0) 00:29:38.974 [2024-12-14 16:43:08.581776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.974 [2024-12-14 16:43:08.581792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9b40, cid 4, qid 0 00:29:38.974 [2024-12-14 16:43:08.581869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.974 [2024-12-14 16:43:08.581875] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.974 [2024-12-14 16:43:08.581880] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.581884] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95ded0): datao=0, datal=3072, cccid=4 00:29:38.974 [2024-12-14 16:43:08.581888] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9b40) on tqpair(0x95ded0): expected_datao=0, payload_size=3072 00:29:38.974 [2024-12-14 16:43:08.581891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.581897] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.581901] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.581930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.974 [2024-12-14 16:43:08.581938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.974 [2024-12-14 16:43:08.581941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.581944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9b40) on tqpair=0x95ded0 00:29:38.974 [2024-12-14 16:43:08.581951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.581955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95ded0) 00:29:38.974 [2024-12-14 16:43:08.581960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.974 [2024-12-14 16:43:08.581974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c9b40, cid 4, qid 0 00:29:38.974 [2024-12-14 16:43:08.582099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.974 [2024-12-14 
16:43:08.582105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.974 [2024-12-14 16:43:08.582107] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.582110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95ded0): datao=0, datal=8, cccid=4 00:29:38.974 [2024-12-14 16:43:08.582114] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c9b40) on tqpair(0x95ded0): expected_datao=0, payload_size=8 00:29:38.974 [2024-12-14 16:43:08.582118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.582124] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.582127] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.626566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.974 [2024-12-14 16:43:08.626576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.974 [2024-12-14 16:43:08.626579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.974 [2024-12-14 16:43:08.626582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9b40) on tqpair=0x95ded0 00:29:38.974 ===================================================== 00:29:38.974 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:38.974 ===================================================== 00:29:38.974 Controller Capabilities/Features 00:29:38.974 ================================ 00:29:38.974 Vendor ID: 0000 00:29:38.974 Subsystem Vendor ID: 0000 00:29:38.974 Serial Number: .................... 00:29:38.974 Model Number: ........................................ 
00:29:38.974 Firmware Version: 25.01 00:29:38.974 Recommended Arb Burst: 0 00:29:38.974 IEEE OUI Identifier: 00 00 00 00:29:38.974 Multi-path I/O 00:29:38.974 May have multiple subsystem ports: No 00:29:38.974 May have multiple controllers: No 00:29:38.974 Associated with SR-IOV VF: No 00:29:38.974 Max Data Transfer Size: 131072 00:29:38.974 Max Number of Namespaces: 0 00:29:38.974 Max Number of I/O Queues: 1024 00:29:38.974 NVMe Specification Version (VS): 1.3 00:29:38.974 NVMe Specification Version (Identify): 1.3 00:29:38.974 Maximum Queue Entries: 128 00:29:38.974 Contiguous Queues Required: Yes 00:29:38.974 Arbitration Mechanisms Supported 00:29:38.974 Weighted Round Robin: Not Supported 00:29:38.974 Vendor Specific: Not Supported 00:29:38.974 Reset Timeout: 15000 ms 00:29:38.974 Doorbell Stride: 4 bytes 00:29:38.974 NVM Subsystem Reset: Not Supported 00:29:38.974 Command Sets Supported 00:29:38.974 NVM Command Set: Supported 00:29:38.974 Boot Partition: Not Supported 00:29:38.974 Memory Page Size Minimum: 4096 bytes 00:29:38.974 Memory Page Size Maximum: 4096 bytes 00:29:38.974 Persistent Memory Region: Not Supported 00:29:38.974 Optional Asynchronous Events Supported 00:29:38.974 Namespace Attribute Notices: Not Supported 00:29:38.974 Firmware Activation Notices: Not Supported 00:29:38.974 ANA Change Notices: Not Supported 00:29:38.974 PLE Aggregate Log Change Notices: Not Supported 00:29:38.974 LBA Status Info Alert Notices: Not Supported 00:29:38.974 EGE Aggregate Log Change Notices: Not Supported 00:29:38.974 Normal NVM Subsystem Shutdown event: Not Supported 00:29:38.974 Zone Descriptor Change Notices: Not Supported 00:29:38.974 Discovery Log Change Notices: Supported 00:29:38.974 Controller Attributes 00:29:38.974 128-bit Host Identifier: Not Supported 00:29:38.974 Non-Operational Permissive Mode: Not Supported 00:29:38.974 NVM Sets: Not Supported 00:29:38.974 Read Recovery Levels: Not Supported 00:29:38.974 Endurance Groups: Not Supported 00:29:38.974 
Predictable Latency Mode: Not Supported 00:29:38.974 Traffic Based Keep ALive: Not Supported 00:29:38.974 Namespace Granularity: Not Supported 00:29:38.974 SQ Associations: Not Supported 00:29:38.975 UUID List: Not Supported 00:29:38.975 Multi-Domain Subsystem: Not Supported 00:29:38.975 Fixed Capacity Management: Not Supported 00:29:38.975 Variable Capacity Management: Not Supported 00:29:38.975 Delete Endurance Group: Not Supported 00:29:38.975 Delete NVM Set: Not Supported 00:29:38.975 Extended LBA Formats Supported: Not Supported 00:29:38.975 Flexible Data Placement Supported: Not Supported 00:29:38.975 00:29:38.975 Controller Memory Buffer Support 00:29:38.975 ================================ 00:29:38.975 Supported: No 00:29:38.975 00:29:38.975 Persistent Memory Region Support 00:29:38.975 ================================ 00:29:38.975 Supported: No 00:29:38.975 00:29:38.975 Admin Command Set Attributes 00:29:38.975 ============================ 00:29:38.975 Security Send/Receive: Not Supported 00:29:38.975 Format NVM: Not Supported 00:29:38.975 Firmware Activate/Download: Not Supported 00:29:38.975 Namespace Management: Not Supported 00:29:38.975 Device Self-Test: Not Supported 00:29:38.975 Directives: Not Supported 00:29:38.975 NVMe-MI: Not Supported 00:29:38.975 Virtualization Management: Not Supported 00:29:38.975 Doorbell Buffer Config: Not Supported 00:29:38.975 Get LBA Status Capability: Not Supported 00:29:38.975 Command & Feature Lockdown Capability: Not Supported 00:29:38.975 Abort Command Limit: 1 00:29:38.975 Async Event Request Limit: 4 00:29:38.975 Number of Firmware Slots: N/A 00:29:38.975 Firmware Slot 1 Read-Only: N/A 00:29:38.975 Firmware Activation Without Reset: N/A 00:29:38.975 Multiple Update Detection Support: N/A 00:29:38.975 Firmware Update Granularity: No Information Provided 00:29:38.975 Per-Namespace SMART Log: No 00:29:38.975 Asymmetric Namespace Access Log Page: Not Supported 00:29:38.975 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:38.975 Command Effects Log Page: Not Supported 00:29:38.975 Get Log Page Extended Data: Supported 00:29:38.975 Telemetry Log Pages: Not Supported 00:29:38.975 Persistent Event Log Pages: Not Supported 00:29:38.975 Supported Log Pages Log Page: May Support 00:29:38.975 Commands Supported & Effects Log Page: Not Supported 00:29:38.975 Feature Identifiers & Effects Log Page:May Support 00:29:38.975 NVMe-MI Commands & Effects Log Page: May Support 00:29:38.975 Data Area 4 for Telemetry Log: Not Supported 00:29:38.975 Error Log Page Entries Supported: 128 00:29:38.975 Keep Alive: Not Supported 00:29:38.975 00:29:38.975 NVM Command Set Attributes 00:29:38.975 ========================== 00:29:38.975 Submission Queue Entry Size 00:29:38.975 Max: 1 00:29:38.975 Min: 1 00:29:38.975 Completion Queue Entry Size 00:29:38.975 Max: 1 00:29:38.975 Min: 1 00:29:38.975 Number of Namespaces: 0 00:29:38.975 Compare Command: Not Supported 00:29:38.975 Write Uncorrectable Command: Not Supported 00:29:38.975 Dataset Management Command: Not Supported 00:29:38.975 Write Zeroes Command: Not Supported 00:29:38.975 Set Features Save Field: Not Supported 00:29:38.975 Reservations: Not Supported 00:29:38.975 Timestamp: Not Supported 00:29:38.975 Copy: Not Supported 00:29:38.975 Volatile Write Cache: Not Present 00:29:38.975 Atomic Write Unit (Normal): 1 00:29:38.975 Atomic Write Unit (PFail): 1 00:29:38.975 Atomic Compare & Write Unit: 1 00:29:38.975 Fused Compare & Write: Supported 00:29:38.975 Scatter-Gather List 00:29:38.975 SGL Command Set: Supported 00:29:38.975 SGL Keyed: Supported 00:29:38.975 SGL Bit Bucket Descriptor: Not Supported 00:29:38.975 SGL Metadata Pointer: Not Supported 00:29:38.975 Oversized SGL: Not Supported 00:29:38.975 SGL Metadata Address: Not Supported 00:29:38.975 SGL Offset: Supported 00:29:38.975 Transport SGL Data Block: Not Supported 00:29:38.975 Replay Protected Memory Block: Not Supported 00:29:38.975 00:29:38.975 
Firmware Slot Information 00:29:38.975 ========================= 00:29:38.975 Active slot: 0 00:29:38.975 00:29:38.975 00:29:38.975 Error Log 00:29:38.975 ========= 00:29:38.975 00:29:38.975 Active Namespaces 00:29:38.975 ================= 00:29:38.975 Discovery Log Page 00:29:38.975 ================== 00:29:38.975 Generation Counter: 2 00:29:38.975 Number of Records: 2 00:29:38.975 Record Format: 0 00:29:38.975 00:29:38.975 Discovery Log Entry 0 00:29:38.975 ---------------------- 00:29:38.975 Transport Type: 3 (TCP) 00:29:38.975 Address Family: 1 (IPv4) 00:29:38.975 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:38.975 Entry Flags: 00:29:38.975 Duplicate Returned Information: 1 00:29:38.975 Explicit Persistent Connection Support for Discovery: 1 00:29:38.975 Transport Requirements: 00:29:38.975 Secure Channel: Not Required 00:29:38.975 Port ID: 0 (0x0000) 00:29:38.975 Controller ID: 65535 (0xffff) 00:29:38.975 Admin Max SQ Size: 128 00:29:38.975 Transport Service Identifier: 4420 00:29:38.975 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:38.975 Transport Address: 10.0.0.2 00:29:38.975 Discovery Log Entry 1 00:29:38.975 ---------------------- 00:29:38.975 Transport Type: 3 (TCP) 00:29:38.975 Address Family: 1 (IPv4) 00:29:38.975 Subsystem Type: 2 (NVM Subsystem) 00:29:38.975 Entry Flags: 00:29:38.975 Duplicate Returned Information: 0 00:29:38.975 Explicit Persistent Connection Support for Discovery: 0 00:29:38.975 Transport Requirements: 00:29:38.975 Secure Channel: Not Required 00:29:38.975 Port ID: 0 (0x0000) 00:29:38.975 Controller ID: 65535 (0xffff) 00:29:38.975 Admin Max SQ Size: 128 00:29:38.975 Transport Service Identifier: 4420 00:29:38.975 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:38.975 Transport Address: 10.0.0.2 [2024-12-14 16:43:08.626665] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:38.975 [2024-12-14 
16:43:08.626676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9540) on tqpair=0x95ded0 00:29:38.975 [2024-12-14 16:43:08.626683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.975 [2024-12-14 16:43:08.626688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c96c0) on tqpair=0x95ded0 00:29:38.975 [2024-12-14 16:43:08.626692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.975 [2024-12-14 16:43:08.626696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c9840) on tqpair=0x95ded0 00:29:38.975 [2024-12-14 16:43:08.626700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.975 [2024-12-14 16:43:08.626704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c99c0) on tqpair=0x95ded0 00:29:38.975 [2024-12-14 16:43:08.626708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.975 [2024-12-14 16:43:08.626717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.975 [2024-12-14 16:43:08.626721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.975 [2024-12-14 16:43:08.626724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95ded0) 00:29:38.975 [2024-12-14 16:43:08.626731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.975 [2024-12-14 16:43:08.626745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c99c0, cid 3, qid 0 00:29:38.975 [2024-12-14 16:43:08.626809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.975 [2024-12-14 
16:43:08.626815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.975 [2024-12-14 16:43:08.626818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.975 [2024-12-14 16:43:08.626821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c99c0) on tqpair=0x95ded0 00:29:38.975 [2024-12-14 16:43:08.626827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.975 [2024-12-14 16:43:08.626831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.976 [2024-12-14 16:43:08.626834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95ded0) 00:29:38.976 [2024-12-14 16:43:08.626839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.976 [2024-12-14 16:43:08.626851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c99c0, cid 3, qid 0 00:29:38.976 [2024-12-14 16:43:08.626962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.976 [2024-12-14 16:43:08.626967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.976 [2024-12-14 16:43:08.626970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.976 [2024-12-14 16:43:08.626974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c99c0) on tqpair=0x95ded0 00:29:38.976 [2024-12-14 16:43:08.626978] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:38.976 [2024-12-14 16:43:08.626982] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:38.976 [2024-12-14 16:43:08.626990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.976 [2024-12-14 16:43:08.626993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.976 
[2024-12-14 16:43:08.626996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95ded0) 00:29:38.976 [2024-12-14 16:43:08.627002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.976 [2024-12-14 16:43:08.627011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c99c0, cid 3, qid 0 00:29:38.976 [2024-12-14 16:43:08.627113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.976 [2024-12-14 16:43:08.627119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.976 [2024-12-14 16:43:08.627122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.976 [2024-12-14 16:43:08.627125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c99c0) on tqpair=0x95ded0 00:29:38.976 [2024-12-14 16:43:08.627134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.976 [2024-12-14 16:43:08.627137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.976 [2024-12-14 16:43:08.627140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95ded0) 00:29:38.976 [2024-12-14 16:43:08.627146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.976 [2024-12-14 16:43:08.627155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c99c0, cid 3, qid 0 00:29:38.976 [2024-12-14 16:43:08.627261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.976 [2024-12-14 16:43:08.627267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.976 [2024-12-14 16:43:08.627272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.976 [2024-12-14 16:43:08.627275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c99c0) on tqpair=0x95ded0 
00:29:38.978 [2024-12-14 16:43:08.634785] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:29:38.978 00:29:38.978 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:38.978 [2024-12-14 16:43:08.670532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:38.978 [2024-12-14 16:43:08.670574] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117610 ] 00:29:38.978 [2024-12-14 16:43:08.708725] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:38.978 [2024-12-14 16:43:08.708764] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:38.978 [2024-12-14 16:43:08.708769] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:38.978 [2024-12-14 16:43:08.708779] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:38.978 [2024-12-14 16:43:08.708787] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:38.978 [2024-12-14 16:43:08.712692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:38.978 [2024-12-14 16:43:08.712718] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22feed0 0 00:29:38.978 [2024-12-14 16:43:08.720569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:38.978 [2024-12-14 16:43:08.720582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:38.978 [2024-12-14 16:43:08.720586] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:38.978 [2024-12-14 16:43:08.720589] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:38.978 [2024-12-14 16:43:08.720612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.978 [2024-12-14 16:43:08.720617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.978 [2024-12-14 16:43:08.720621] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.978 [2024-12-14 16:43:08.720631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:38.978 [2024-12-14 16:43:08.720647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.978 [2024-12-14 16:43:08.728566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.978 [2024-12-14 16:43:08.728574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.978 [2024-12-14 16:43:08.728577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.978 [2024-12-14 16:43:08.728581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.978 [2024-12-14 16:43:08.728590] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:38.978 [2024-12-14 16:43:08.728595] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:38.978 [2024-12-14 16:43:08.728600] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:38.978 [2024-12-14 16:43:08.728609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.978 [2024-12-14 16:43:08.728613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.978 [2024-12-14 16:43:08.728616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.978 [2024-12-14 16:43:08.728623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.978 [2024-12-14 16:43:08.728635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.978 [2024-12-14 16:43:08.728791] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.978 [2024-12-14 16:43:08.728797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.978 [2024-12-14 16:43:08.728800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.978 [2024-12-14 16:43:08.728804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.979 [2024-12-14 16:43:08.728810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:38.979 [2024-12-14 16:43:08.728817] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:38.979 [2024-12-14 16:43:08.728823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.728826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.728830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.979 [2024-12-14 16:43:08.728835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.979 [2024-12-14 16:43:08.728845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.979 [2024-12-14 16:43:08.728912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.979 [2024-12-14 16:43:08.728918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.979 [2024-12-14 16:43:08.728921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.728924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.979 [2024-12-14 16:43:08.728928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:29:38.979 [2024-12-14 16:43:08.728935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:38.979 [2024-12-14 16:43:08.728941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.728945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.728948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.979 [2024-12-14 16:43:08.728953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.979 [2024-12-14 16:43:08.728963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.979 [2024-12-14 16:43:08.729030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.979 [2024-12-14 16:43:08.729036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.979 [2024-12-14 16:43:08.729039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.979 [2024-12-14 16:43:08.729046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:38.979 [2024-12-14 16:43:08.729054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.979 [2024-12-14 16:43:08.729067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.979 [2024-12-14 16:43:08.729076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.979 [2024-12-14 16:43:08.729147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.979 [2024-12-14 16:43:08.729153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.979 [2024-12-14 16:43:08.729156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.979 [2024-12-14 16:43:08.729163] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:38.979 [2024-12-14 16:43:08.729168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:38.979 [2024-12-14 16:43:08.729176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:38.979 [2024-12-14 16:43:08.729284] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:38.979 [2024-12-14 16:43:08.729288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:38.979 [2024-12-14 16:43:08.729294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.979 [2024-12-14 16:43:08.729307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.979 [2024-12-14 16:43:08.729317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.979 [2024-12-14 16:43:08.729378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.979 [2024-12-14 16:43:08.729384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.979 [2024-12-14 16:43:08.729387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.979 [2024-12-14 16:43:08.729395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:38.979 [2024-12-14 16:43:08.729403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.979 [2024-12-14 16:43:08.729415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.979 [2024-12-14 16:43:08.729425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.979 [2024-12-14 16:43:08.729496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.979 [2024-12-14 16:43:08.729502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.979 [2024-12-14 16:43:08.729505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.979 [2024-12-14 16:43:08.729512] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:38.979 [2024-12-14 16:43:08.729517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:38.979 [2024-12-14 16:43:08.729523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:38.979 [2024-12-14 16:43:08.729531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:38.979 [2024-12-14 16:43:08.729537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.979 [2024-12-14 16:43:08.729547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.979 [2024-12-14 16:43:08.729560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.979 [2024-12-14 16:43:08.729648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.979 [2024-12-14 16:43:08.729656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.979 [2024-12-14 16:43:08.729660] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729663] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22feed0): datao=0, datal=4096, cccid=0 00:29:38.979 [2024-12-14 16:43:08.729667] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236a540) on tqpair(0x22feed0): expected_datao=0, payload_size=4096 00:29:38.979 [2024-12-14 16:43:08.729671] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729677] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729681] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.979 [2024-12-14 16:43:08.729698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.979 [2024-12-14 16:43:08.729701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.979 [2024-12-14 16:43:08.729711] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:38.979 [2024-12-14 16:43:08.729715] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:38.979 [2024-12-14 16:43:08.729719] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:38.979 [2024-12-14 16:43:08.729722] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:38.979 [2024-12-14 16:43:08.729726] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:38.979 [2024-12-14 16:43:08.729730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:38.979 [2024-12-14 16:43:08.729741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:38.979 [2024-12-14 16:43:08.729750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729754] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.979 [2024-12-14 16:43:08.729757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.729763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:38.980 [2024-12-14 16:43:08.729774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 0, qid 0 00:29:38.980 [2024-12-14 16:43:08.729839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.980 [2024-12-14 16:43:08.729845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.980 [2024-12-14 16:43:08.729848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.980 [2024-12-14 16:43:08.729857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.729869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.980 [2024-12-14 16:43:08.729874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.729885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:38.980 [2024-12-14 16:43:08.729892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.729903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.980 [2024-12-14 16:43:08.729908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.729920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.980 [2024-12-14 16:43:08.729924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.729934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.729940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.729944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.729949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.980 [2024-12-14 16:43:08.729960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x236a540, cid 0, qid 0 00:29:38.980 [2024-12-14 16:43:08.729965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 1, qid 0 00:29:38.980 [2024-12-14 16:43:08.729969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a840, cid 2, qid 0 00:29:38.980 [2024-12-14 16:43:08.729973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a9c0, cid 3, qid 0 00:29:38.980 [2024-12-14 16:43:08.729977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236ab40, cid 4, qid 0 00:29:38.980 [2024-12-14 16:43:08.730071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.980 [2024-12-14 16:43:08.730077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.980 [2024-12-14 16:43:08.730080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236ab40) on tqpair=0x22feed0 00:29:38.980 [2024-12-14 16:43:08.730087] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:38.980 [2024-12-14 16:43:08.730092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.730101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.730108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.730113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 
16:43:08.730120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.730126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:38.980 [2024-12-14 16:43:08.730135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236ab40, cid 4, qid 0 00:29:38.980 [2024-12-14 16:43:08.730200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.980 [2024-12-14 16:43:08.730207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.980 [2024-12-14 16:43:08.730211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236ab40) on tqpair=0x22feed0 00:29:38.980 [2024-12-14 16:43:08.730263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.730272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.730278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.730287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.980 [2024-12-14 16:43:08.730297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236ab40, cid 4, qid 0 00:29:38.980 [2024-12-14 16:43:08.730370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.980 [2024-12-14 16:43:08.730376] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.980 [2024-12-14 16:43:08.730379] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730382] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22feed0): datao=0, datal=4096, cccid=4 00:29:38.980 [2024-12-14 16:43:08.730386] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236ab40) on tqpair(0x22feed0): expected_datao=0, payload_size=4096 00:29:38.980 [2024-12-14 16:43:08.730390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730396] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730399] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.980 [2024-12-14 16:43:08.730414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.980 [2024-12-14 16:43:08.730416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236ab40) on tqpair=0x22feed0 00:29:38.980 [2024-12-14 16:43:08.730429] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:38.980 [2024-12-14 16:43:08.730440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.730448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.730454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.730463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.980 [2024-12-14 16:43:08.730473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236ab40, cid 4, qid 0 00:29:38.980 [2024-12-14 16:43:08.730565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.980 [2024-12-14 16:43:08.730571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.980 [2024-12-14 16:43:08.730574] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730577] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22feed0): datao=0, datal=4096, cccid=4 00:29:38.980 [2024-12-14 16:43:08.730581] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236ab40) on tqpair(0x22feed0): expected_datao=0, payload_size=4096 00:29:38.980 [2024-12-14 16:43:08.730586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730597] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730601] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.980 [2024-12-14 16:43:08.730638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.980 [2024-12-14 16:43:08.730641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236ab40) on tqpair=0x22feed0 00:29:38.980 [2024-12-14 16:43:08.730654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:38.980 
[2024-12-14 16:43:08.730664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:38.980 [2024-12-14 16:43:08.730670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22feed0) 00:29:38.980 [2024-12-14 16:43:08.730680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.980 [2024-12-14 16:43:08.730690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236ab40, cid 4, qid 0 00:29:38.980 [2024-12-14 16:43:08.730768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.980 [2024-12-14 16:43:08.730774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.980 [2024-12-14 16:43:08.730777] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730780] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22feed0): datao=0, datal=4096, cccid=4 00:29:38.980 [2024-12-14 16:43:08.730784] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236ab40) on tqpair(0x22feed0): expected_datao=0, payload_size=4096 00:29:38.980 [2024-12-14 16:43:08.730788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730793] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730796] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.980 [2024-12-14 16:43:08.730805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.980 [2024-12-14 16:43:08.730811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.981 [2024-12-14 16:43:08.730814] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.730817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236ab40) on tqpair=0x22feed0 00:29:38.981 [2024-12-14 16:43:08.730823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:38.981 [2024-12-14 16:43:08.730831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:38.981 [2024-12-14 16:43:08.730838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:38.981 [2024-12-14 16:43:08.730844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:38.981 [2024-12-14 16:43:08.730848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:38.981 [2024-12-14 16:43:08.730853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:38.981 [2024-12-14 16:43:08.730857] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:38.981 [2024-12-14 16:43:08.730861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:38.981 [2024-12-14 16:43:08.730868] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:38.981 [2024-12-14 16:43:08.730880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.730883] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.730889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.981 [2024-12-14 16:43:08.730895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.730898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.730901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.730906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.981 [2024-12-14 16:43:08.730918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236ab40, cid 4, qid 0 00:29:38.981 [2024-12-14 16:43:08.730923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236acc0, cid 5, qid 0 00:29:38.981 [2024-12-14 16:43:08.731011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.981 [2024-12-14 16:43:08.731016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.981 [2024-12-14 16:43:08.731019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236ab40) on tqpair=0x22feed0 00:29:38.981 [2024-12-14 16:43:08.731029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.981 [2024-12-14 16:43:08.731033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.981 [2024-12-14 16:43:08.731036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236acc0) on tqpair=0x22feed0 00:29:38.981 [2024-12-14 
16:43:08.731049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.731058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.981 [2024-12-14 16:43:08.731067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236acc0, cid 5, qid 0 00:29:38.981 [2024-12-14 16:43:08.731136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.981 [2024-12-14 16:43:08.731142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.981 [2024-12-14 16:43:08.731145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236acc0) on tqpair=0x22feed0 00:29:38.981 [2024-12-14 16:43:08.731157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.731166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.981 [2024-12-14 16:43:08.731175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236acc0, cid 5, qid 0 00:29:38.981 [2024-12-14 16:43:08.731233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.981 [2024-12-14 16:43:08.731239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.981 [2024-12-14 16:43:08.731242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x236acc0) on tqpair=0x22feed0 00:29:38.981 [2024-12-14 16:43:08.731253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.731264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.981 [2024-12-14 16:43:08.731274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236acc0, cid 5, qid 0 00:29:38.981 [2024-12-14 16:43:08.731332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.981 [2024-12-14 16:43:08.731338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.981 [2024-12-14 16:43:08.731341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236acc0) on tqpair=0x22feed0 00:29:38.981 [2024-12-14 16:43:08.731356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.731366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.981 [2024-12-14 16:43:08.731372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.731380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:38.981 [2024-12-14 16:43:08.731386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.731395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.981 [2024-12-14 16:43:08.731401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22feed0) 00:29:38.981 [2024-12-14 16:43:08.731409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.981 [2024-12-14 16:43:08.731419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236acc0, cid 5, qid 0 00:29:38.981 [2024-12-14 16:43:08.731424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236ab40, cid 4, qid 0 00:29:38.981 [2024-12-14 16:43:08.731428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236ae40, cid 6, qid 0 00:29:38.981 [2024-12-14 16:43:08.731432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236afc0, cid 7, qid 0 00:29:38.981 [2024-12-14 16:43:08.731573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.981 [2024-12-14 16:43:08.731580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.981 [2024-12-14 16:43:08.731583] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731586] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22feed0): datao=0, datal=8192, cccid=5 00:29:38.981 [2024-12-14 16:43:08.731590] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236acc0) on tqpair(0x22feed0): expected_datao=0, payload_size=8192 00:29:38.981 [2024-12-14 16:43:08.731594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731605] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731608] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.981 [2024-12-14 16:43:08.731621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.981 [2024-12-14 16:43:08.731628] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731631] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22feed0): datao=0, datal=512, cccid=4 00:29:38.981 [2024-12-14 16:43:08.731635] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236ab40) on tqpair(0x22feed0): expected_datao=0, payload_size=512 00:29:38.981 [2024-12-14 16:43:08.731639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731644] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731647] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.981 [2024-12-14 16:43:08.731657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.981 [2024-12-14 16:43:08.731660] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731663] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22feed0): datao=0, datal=512, cccid=6 00:29:38.981 [2024-12-14 16:43:08.731667] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x236ae40) on tqpair(0x22feed0): expected_datao=0, payload_size=512 00:29:38.981 [2024-12-14 16:43:08.731670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731676] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731679] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:38.981 [2024-12-14 16:43:08.731688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:38.981 [2024-12-14 16:43:08.731692] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:38.981 [2024-12-14 16:43:08.731695] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22feed0): datao=0, datal=4096, cccid=7 00:29:38.981 [2024-12-14 16:43:08.731699] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236afc0) on tqpair(0x22feed0): expected_datao=0, payload_size=4096 00:29:38.981 [2024-12-14 16:43:08.731702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.982 [2024-12-14 16:43:08.731708] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:38.982 [2024-12-14 16:43:08.731711] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:38.982 [2024-12-14 16:43:08.731719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.982 [2024-12-14 16:43:08.731724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.982 [2024-12-14 16:43:08.731727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.982 [2024-12-14 16:43:08.731730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236acc0) on tqpair=0x22feed0 00:29:38.982 [2024-12-14 16:43:08.731739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.982 [2024-12-14 16:43:08.731744] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.982 [2024-12-14 16:43:08.731747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.982 [2024-12-14 16:43:08.731751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236ab40) on tqpair=0x22feed0 00:29:38.982 [2024-12-14 16:43:08.731759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.982 [2024-12-14 16:43:08.731764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.982 [2024-12-14 16:43:08.731767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.982 [2024-12-14 16:43:08.731771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236ae40) on tqpair=0x22feed0 00:29:38.982 [2024-12-14 16:43:08.731776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.982 [2024-12-14 16:43:08.731781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.982 [2024-12-14 16:43:08.731784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.982 [2024-12-14 16:43:08.731788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236afc0) on tqpair=0x22feed0 00:29:38.982 ===================================================== 00:29:38.982 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.982 ===================================================== 00:29:38.982 Controller Capabilities/Features 00:29:38.982 ================================ 00:29:38.982 Vendor ID: 8086 00:29:38.982 Subsystem Vendor ID: 8086 00:29:38.982 Serial Number: SPDK00000000000001 00:29:38.982 Model Number: SPDK bdev Controller 00:29:38.982 Firmware Version: 25.01 00:29:38.982 Recommended Arb Burst: 6 00:29:38.982 IEEE OUI Identifier: e4 d2 5c 00:29:38.982 Multi-path I/O 00:29:38.982 May have multiple subsystem ports: Yes 00:29:38.982 May have multiple controllers: Yes 00:29:38.982 Associated with SR-IOV VF: No 
00:29:38.982 Max Data Transfer Size: 131072 00:29:38.982 Max Number of Namespaces: 32 00:29:38.982 Max Number of I/O Queues: 127 00:29:38.982 NVMe Specification Version (VS): 1.3 00:29:38.982 NVMe Specification Version (Identify): 1.3 00:29:38.982 Maximum Queue Entries: 128 00:29:38.982 Contiguous Queues Required: Yes 00:29:38.982 Arbitration Mechanisms Supported 00:29:38.982 Weighted Round Robin: Not Supported 00:29:38.982 Vendor Specific: Not Supported 00:29:38.982 Reset Timeout: 15000 ms 00:29:38.982 Doorbell Stride: 4 bytes 00:29:38.982 NVM Subsystem Reset: Not Supported 00:29:38.982 Command Sets Supported 00:29:38.982 NVM Command Set: Supported 00:29:38.982 Boot Partition: Not Supported 00:29:38.982 Memory Page Size Minimum: 4096 bytes 00:29:38.982 Memory Page Size Maximum: 4096 bytes 00:29:38.982 Persistent Memory Region: Not Supported 00:29:38.982 Optional Asynchronous Events Supported 00:29:38.982 Namespace Attribute Notices: Supported 00:29:38.982 Firmware Activation Notices: Not Supported 00:29:38.982 ANA Change Notices: Not Supported 00:29:38.982 PLE Aggregate Log Change Notices: Not Supported 00:29:38.982 LBA Status Info Alert Notices: Not Supported 00:29:38.982 EGE Aggregate Log Change Notices: Not Supported 00:29:38.982 Normal NVM Subsystem Shutdown event: Not Supported 00:29:38.982 Zone Descriptor Change Notices: Not Supported 00:29:38.982 Discovery Log Change Notices: Not Supported 00:29:38.982 Controller Attributes 00:29:38.982 128-bit Host Identifier: Supported 00:29:38.982 Non-Operational Permissive Mode: Not Supported 00:29:38.982 NVM Sets: Not Supported 00:29:38.982 Read Recovery Levels: Not Supported 00:29:38.982 Endurance Groups: Not Supported 00:29:38.982 Predictable Latency Mode: Not Supported 00:29:38.982 Traffic Based Keep ALive: Not Supported 00:29:38.982 Namespace Granularity: Not Supported 00:29:38.982 SQ Associations: Not Supported 00:29:38.982 UUID List: Not Supported 00:29:38.982 Multi-Domain Subsystem: Not Supported 00:29:38.982 
Fixed Capacity Management: Not Supported 00:29:38.982 Variable Capacity Management: Not Supported 00:29:38.982 Delete Endurance Group: Not Supported 00:29:38.982 Delete NVM Set: Not Supported 00:29:38.982 Extended LBA Formats Supported: Not Supported 00:29:38.982 Flexible Data Placement Supported: Not Supported 00:29:38.982 00:29:38.982 Controller Memory Buffer Support 00:29:38.982 ================================ 00:29:38.982 Supported: No 00:29:38.982 00:29:38.982 Persistent Memory Region Support 00:29:38.982 ================================ 00:29:38.982 Supported: No 00:29:38.982 00:29:38.982 Admin Command Set Attributes 00:29:38.982 ============================ 00:29:38.982 Security Send/Receive: Not Supported 00:29:38.982 Format NVM: Not Supported 00:29:38.982 Firmware Activate/Download: Not Supported 00:29:38.982 Namespace Management: Not Supported 00:29:38.982 Device Self-Test: Not Supported 00:29:38.982 Directives: Not Supported 00:29:38.982 NVMe-MI: Not Supported 00:29:38.982 Virtualization Management: Not Supported 00:29:38.982 Doorbell Buffer Config: Not Supported 00:29:38.982 Get LBA Status Capability: Not Supported 00:29:38.982 Command & Feature Lockdown Capability: Not Supported 00:29:38.982 Abort Command Limit: 4 00:29:38.982 Async Event Request Limit: 4 00:29:38.982 Number of Firmware Slots: N/A 00:29:38.982 Firmware Slot 1 Read-Only: N/A 00:29:38.982 Firmware Activation Without Reset: N/A 00:29:38.982 Multiple Update Detection Support: N/A 00:29:38.982 Firmware Update Granularity: No Information Provided 00:29:38.982 Per-Namespace SMART Log: No 00:29:38.982 Asymmetric Namespace Access Log Page: Not Supported 00:29:38.982 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:38.982 Command Effects Log Page: Supported 00:29:38.982 Get Log Page Extended Data: Supported 00:29:38.982 Telemetry Log Pages: Not Supported 00:29:38.982 Persistent Event Log Pages: Not Supported 00:29:38.982 Supported Log Pages Log Page: May Support 00:29:38.982 Commands Supported & 
Effects Log Page: Not Supported 00:29:38.982 Feature Identifiers & Effects Log Page:May Support 00:29:38.982 NVMe-MI Commands & Effects Log Page: May Support 00:29:38.982 Data Area 4 for Telemetry Log: Not Supported 00:29:38.982 Error Log Page Entries Supported: 128 00:29:38.982 Keep Alive: Supported 00:29:38.982 Keep Alive Granularity: 10000 ms 00:29:38.982 00:29:38.982 NVM Command Set Attributes 00:29:38.982 ========================== 00:29:38.982 Submission Queue Entry Size 00:29:38.982 Max: 64 00:29:38.982 Min: 64 00:29:38.982 Completion Queue Entry Size 00:29:38.982 Max: 16 00:29:38.982 Min: 16 00:29:38.982 Number of Namespaces: 32 00:29:38.982 Compare Command: Supported 00:29:38.982 Write Uncorrectable Command: Not Supported 00:29:38.982 Dataset Management Command: Supported 00:29:38.982 Write Zeroes Command: Supported 00:29:38.982 Set Features Save Field: Not Supported 00:29:38.982 Reservations: Supported 00:29:38.982 Timestamp: Not Supported 00:29:38.982 Copy: Supported 00:29:38.982 Volatile Write Cache: Present 00:29:38.982 Atomic Write Unit (Normal): 1 00:29:38.982 Atomic Write Unit (PFail): 1 00:29:38.982 Atomic Compare & Write Unit: 1 00:29:38.982 Fused Compare & Write: Supported 00:29:38.982 Scatter-Gather List 00:29:38.982 SGL Command Set: Supported 00:29:38.982 SGL Keyed: Supported 00:29:38.982 SGL Bit Bucket Descriptor: Not Supported 00:29:38.982 SGL Metadata Pointer: Not Supported 00:29:38.982 Oversized SGL: Not Supported 00:29:38.982 SGL Metadata Address: Not Supported 00:29:38.982 SGL Offset: Supported 00:29:38.982 Transport SGL Data Block: Not Supported 00:29:38.982 Replay Protected Memory Block: Not Supported 00:29:38.982 00:29:38.982 Firmware Slot Information 00:29:38.982 ========================= 00:29:38.982 Active slot: 1 00:29:38.982 Slot 1 Firmware Revision: 25.01 00:29:38.982 00:29:38.982 00:29:38.982 Commands Supported and Effects 00:29:38.982 ============================== 00:29:38.982 Admin Commands 00:29:38.982 -------------- 
00:29:38.982 Get Log Page (02h): Supported 00:29:38.982 Identify (06h): Supported 00:29:38.982 Abort (08h): Supported 00:29:38.982 Set Features (09h): Supported 00:29:38.982 Get Features (0Ah): Supported 00:29:38.982 Asynchronous Event Request (0Ch): Supported 00:29:38.982 Keep Alive (18h): Supported 00:29:38.982 I/O Commands 00:29:38.982 ------------ 00:29:38.982 Flush (00h): Supported LBA-Change 00:29:38.982 Write (01h): Supported LBA-Change 00:29:38.982 Read (02h): Supported 00:29:38.982 Compare (05h): Supported 00:29:38.982 Write Zeroes (08h): Supported LBA-Change 00:29:38.982 Dataset Management (09h): Supported LBA-Change 00:29:38.982 Copy (19h): Supported LBA-Change 00:29:38.982 00:29:38.982 Error Log 00:29:38.982 ========= 00:29:38.982 00:29:38.982 Arbitration 00:29:38.982 =========== 00:29:38.982 Arbitration Burst: 1 00:29:38.982 00:29:38.983 Power Management 00:29:38.983 ================ 00:29:38.983 Number of Power States: 1 00:29:38.983 Current Power State: Power State #0 00:29:38.983 Power State #0: 00:29:38.983 Max Power: 0.00 W 00:29:38.983 Non-Operational State: Operational 00:29:38.983 Entry Latency: Not Reported 00:29:38.983 Exit Latency: Not Reported 00:29:38.983 Relative Read Throughput: 0 00:29:38.983 Relative Read Latency: 0 00:29:38.983 Relative Write Throughput: 0 00:29:38.983 Relative Write Latency: 0 00:29:38.983 Idle Power: Not Reported 00:29:38.983 Active Power: Not Reported 00:29:38.983 Non-Operational Permissive Mode: Not Supported 00:29:38.983 00:29:38.983 Health Information 00:29:38.983 ================== 00:29:38.983 Critical Warnings: 00:29:38.983 Available Spare Space: OK 00:29:38.983 Temperature: OK 00:29:38.983 Device Reliability: OK 00:29:38.983 Read Only: No 00:29:38.983 Volatile Memory Backup: OK 00:29:38.983 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:38.983 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:38.983 Available Spare: 0% 00:29:38.983 Available Spare Threshold: 0% 00:29:38.983 Life Percentage 
Used:[2024-12-14 16:43:08.731868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.731874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22feed0) 00:29:38.983 [2024-12-14 16:43:08.731880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.983 [2024-12-14 16:43:08.731892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236afc0, cid 7, qid 0 00:29:38.983 [2024-12-14 16:43:08.731965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.983 [2024-12-14 16:43:08.731971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.983 [2024-12-14 16:43:08.731974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.731978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236afc0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732006] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:38.983 [2024-12-14 16:43:08.732015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.983 [2024-12-14 16:43:08.732025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.983 [2024-12-14 16:43:08.732033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a840) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.983 [2024-12-14 16:43:08.732042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a9c0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.983 [2024-12-14 16:43:08.732052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22feed0) 00:29:38.983 [2024-12-14 16:43:08.732065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.983 [2024-12-14 16:43:08.732076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a9c0, cid 3, qid 0 00:29:38.983 [2024-12-14 16:43:08.732138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.983 [2024-12-14 16:43:08.732145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.983 [2024-12-14 16:43:08.732148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a9c0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22feed0) 00:29:38.983 [2024-12-14 16:43:08.732169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.983 [2024-12-14 16:43:08.732181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a9c0, cid 3, qid 0 00:29:38.983 [2024-12-14 16:43:08.732254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.983 [2024-12-14 16:43:08.732260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.983 [2024-12-14 16:43:08.732263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a9c0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732274] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:38.983 [2024-12-14 16:43:08.732278] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:38.983 [2024-12-14 16:43:08.732286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22feed0) 00:29:38.983 [2024-12-14 16:43:08.732299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.983 [2024-12-14 16:43:08.732309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a9c0, cid 3, qid 0 00:29:38.983 [2024-12-14 16:43:08.732372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.983 [2024-12-14 16:43:08.732378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.983 [2024-12-14 16:43:08.732381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732384] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a9c0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22feed0) 00:29:38.983 [2024-12-14 16:43:08.732405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.983 [2024-12-14 16:43:08.732415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a9c0, cid 3, qid 0 00:29:38.983 [2024-12-14 16:43:08.732479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.983 [2024-12-14 16:43:08.732485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.983 [2024-12-14 16:43:08.732488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a9c0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.732500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.732506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22feed0) 00:29:38.983 [2024-12-14 16:43:08.732512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.983 [2024-12-14 16:43:08.732522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a9c0, cid 3, qid 0 00:29:38.983 [2024-12-14 16:43:08.736563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.983 [2024-12-14 
16:43:08.736572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.983 [2024-12-14 16:43:08.736575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.736578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a9c0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.736587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.736591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.736594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22feed0) 00:29:38.983 [2024-12-14 16:43:08.736600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.983 [2024-12-14 16:43:08.736611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a9c0, cid 3, qid 0 00:29:38.983 [2024-12-14 16:43:08.736763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:38.983 [2024-12-14 16:43:08.736769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:38.983 [2024-12-14 16:43:08.736772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:38.983 [2024-12-14 16:43:08.736777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a9c0) on tqpair=0x22feed0 00:29:38.983 [2024-12-14 16:43:08.736783] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:29:38.983 0% 00:29:38.983 Data Units Read: 0 00:29:38.983 Data Units Written: 0 00:29:38.983 Host Read Commands: 0 00:29:38.983 Host Write Commands: 0 00:29:38.983 Controller Busy Time: 0 minutes 00:29:38.983 Power Cycles: 0 00:29:38.983 Power On Hours: 0 hours 00:29:38.983 Unsafe Shutdowns: 0 00:29:38.983 Unrecoverable Media Errors: 0 00:29:38.983 Lifetime Error Log Entries: 0 
00:29:38.983 Warning Temperature Time: 0 minutes 00:29:38.983 Critical Temperature Time: 0 minutes 00:29:38.983 00:29:38.983 Number of Queues 00:29:38.983 ================ 00:29:38.983 Number of I/O Submission Queues: 127 00:29:38.983 Number of I/O Completion Queues: 127 00:29:38.983 00:29:38.983 Active Namespaces 00:29:38.983 ================= 00:29:38.983 Namespace ID:1 00:29:38.983 Error Recovery Timeout: Unlimited 00:29:38.983 Command Set Identifier: NVM (00h) 00:29:38.983 Deallocate: Supported 00:29:38.983 Deallocated/Unwritten Error: Not Supported 00:29:38.983 Deallocated Read Value: Unknown 00:29:38.983 Deallocate in Write Zeroes: Not Supported 00:29:38.983 Deallocated Guard Field: 0xFFFF 00:29:38.983 Flush: Supported 00:29:38.983 Reservation: Supported 00:29:38.983 Namespace Sharing Capabilities: Multiple Controllers 00:29:38.984 Size (in LBAs): 131072 (0GiB) 00:29:38.984 Capacity (in LBAs): 131072 (0GiB) 00:29:38.984 Utilization (in LBAs): 131072 (0GiB) 00:29:38.984 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:38.984 EUI64: ABCDEF0123456789 00:29:38.984 UUID: cc97af6e-83c0-440a-a148-b3860ea293c3 00:29:38.984 Thin Provisioning: Not Supported 00:29:38.984 Per-NS Atomic Units: Yes 00:29:38.984 Atomic Boundary Size (Normal): 0 00:29:38.984 Atomic Boundary Size (PFail): 0 00:29:38.984 Atomic Boundary Offset: 0 00:29:38.984 Maximum Single Source Range Length: 65535 00:29:38.984 Maximum Copy Length: 65535 00:29:38.984 Maximum Source Range Count: 1 00:29:38.984 NGUID/EUI64 Never Reused: No 00:29:38.984 Namespace Write Protected: No 00:29:38.984 Number of LBA Formats: 1 00:29:38.984 Current LBA Format: LBA Format #00 00:29:38.984 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:38.984 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:38.984 16:43:08 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.984 rmmod nvme_tcp 00:29:38.984 rmmod nvme_fabrics 00:29:38.984 rmmod nvme_keyring 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1117390 ']' 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1117390 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1117390 ']' 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1117390 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:38.984 16:43:08 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1117390 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1117390' 00:29:38.984 killing process with pid 1117390 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1117390 00:29:38.984 16:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1117390 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:39.243 16:43:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.148 16:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.148 00:29:41.148 real 0m9.121s 00:29:41.148 user 0m5.059s 00:29:41.148 sys 0m4.768s 00:29:41.148 16:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.148 16:43:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.148 ************************************ 00:29:41.148 END TEST nvmf_identify 00:29:41.148 ************************************ 00:29:41.148 16:43:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:41.148 16:43:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:41.148 16:43:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.148 16:43:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.148 ************************************ 00:29:41.148 START TEST nvmf_perf 00:29:41.148 ************************************ 00:29:41.148 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:41.408 * Looking for test storage... 
00:29:41.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.408 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.409 --rc genhtml_branch_coverage=1 00:29:41.409 --rc genhtml_function_coverage=1 00:29:41.409 --rc genhtml_legend=1 00:29:41.409 --rc geninfo_all_blocks=1 00:29:41.409 --rc geninfo_unexecuted_blocks=1 00:29:41.409 00:29:41.409 ' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:41.409 --rc genhtml_branch_coverage=1 00:29:41.409 --rc genhtml_function_coverage=1 00:29:41.409 --rc genhtml_legend=1 00:29:41.409 --rc geninfo_all_blocks=1 00:29:41.409 --rc geninfo_unexecuted_blocks=1 00:29:41.409 00:29:41.409 ' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.409 --rc genhtml_branch_coverage=1 00:29:41.409 --rc genhtml_function_coverage=1 00:29:41.409 --rc genhtml_legend=1 00:29:41.409 --rc geninfo_all_blocks=1 00:29:41.409 --rc geninfo_unexecuted_blocks=1 00:29:41.409 00:29:41.409 ' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.409 --rc genhtml_branch_coverage=1 00:29:41.409 --rc genhtml_function_coverage=1 00:29:41.409 --rc genhtml_legend=1 00:29:41.409 --rc geninfo_all_blocks=1 00:29:41.409 --rc geninfo_unexecuted_blocks=1 00:29:41.409 00:29:41.409 ' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:41.409 16:43:11 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.409 16:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.977 16:43:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.977 
16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.977 16:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:47.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:47.977 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.977 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:47.978 Found net devices under 0000:af:00.0: cvl_0_0 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.978 16:43:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:47.978 Found net devices under 0000:af:00.1: cvl_0_1 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:29:47.978 00:29:47.978 --- 10.0.0.2 ping statistics --- 00:29:47.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.978 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:47.978 00:29:47.978 --- 10.0.0.1 ping statistics --- 00:29:47.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.978 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1121081 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1121081 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1121081 ']' 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.978 [2024-12-14 16:43:17.345131] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:47.978 [2024-12-14 16:43:17.345179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.978 [2024-12-14 16:43:17.423841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.978 [2024-12-14 16:43:17.447006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.978 [2024-12-14 16:43:17.447045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.978 [2024-12-14 16:43:17.447051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.978 [2024-12-14 16:43:17.447057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.978 [2024-12-14 16:43:17.447062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:47.978 [2024-12-14 16:43:17.448578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.978 [2024-12-14 16:43:17.448693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.978 [2024-12-14 16:43:17.448779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.978 [2024-12-14 16:43:17.448779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:47.978 16:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:51.266 16:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:51.266 16:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:51.266 16:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:51.266 16:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:51.266 16:43:21 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:51.266 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:51.266 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:51.266 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:51.266 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:51.266 [2024-12-14 16:43:21.223343] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.266 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.525 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:51.525 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.784 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:51.784 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:51.784 16:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.043 [2024-12-14 16:43:22.014208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.043 16:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:52.301 16:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:52.301 16:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:52.301 16:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:52.301 16:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:53.678 Initializing NVMe Controllers 00:29:53.678 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:53.678 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:53.678 Initialization complete. Launching workers. 00:29:53.678 ======================================================== 00:29:53.678 Latency(us) 00:29:53.678 Device Information : IOPS MiB/s Average min max 00:29:53.678 PCIE (0000:5e:00.0) NSID 1 from core 0: 98805.36 385.96 323.37 34.50 4597.88 00:29:53.678 ======================================================== 00:29:53.678 Total : 98805.36 385.96 323.37 34.50 4597.88 00:29:53.678 00:29:53.678 16:43:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:54.614 Initializing NVMe Controllers 00:29:54.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:54.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:54.614 Initialization complete. Launching workers. 
00:29:54.614 ======================================================== 00:29:54.614 Latency(us) 00:29:54.614 Device Information : IOPS MiB/s Average min max 00:29:54.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.00 0.38 10645.24 105.37 45684.63 00:29:54.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19456.46 7212.81 47885.71 00:29:54.614 ======================================================== 00:29:54.614 Total : 148.00 0.58 13741.08 105.37 47885.71 00:29:54.614 00:29:54.614 16:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.996 Initializing NVMe Controllers 00:29:55.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:55.997 Initialization complete. Launching workers. 
00:29:55.997 ======================================================== 00:29:55.997 Latency(us) 00:29:55.997 Device Information : IOPS MiB/s Average min max 00:29:55.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11233.59 43.88 2847.74 492.15 8828.02 00:29:55.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3762.13 14.70 8518.01 6669.22 16100.52 00:29:55.997 ======================================================== 00:29:55.997 Total : 14995.73 58.58 4270.30 492.15 16100.52 00:29:55.997 00:29:55.997 16:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:55.997 16:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:55.997 16:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:58.530 Initializing NVMe Controllers 00:29:58.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.531 Controller IO queue size 128, less than required. 00:29:58.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.531 Controller IO queue size 128, less than required. 00:29:58.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:58.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:58.531 Initialization complete. Launching workers. 
00:29:58.531 ======================================================== 00:29:58.531 Latency(us) 00:29:58.531 Device Information : IOPS MiB/s Average min max 00:29:58.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1868.51 467.13 69627.01 50619.97 117639.60 00:29:58.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.76 144.94 227815.10 79867.72 321966.53 00:29:58.531 ======================================================== 00:29:58.531 Total : 2448.28 612.07 107086.67 50619.97 321966.53 00:29:58.531 00:29:58.531 16:43:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:58.789 No valid NVMe controllers or AIO or URING devices found 00:29:58.789 Initializing NVMe Controllers 00:29:58.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.789 Controller IO queue size 128, less than required. 00:29:58.789 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.789 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:58.789 Controller IO queue size 128, less than required. 00:29:58.789 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.789 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:58.789 WARNING: Some requested NVMe devices were skipped 00:29:58.789 16:43:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:01.324 Initializing NVMe Controllers 00:30:01.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.324 Controller IO queue size 128, less than required. 00:30:01.324 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:01.324 Controller IO queue size 128, less than required. 00:30:01.324 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:01.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:01.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:01.324 Initialization complete. Launching workers. 
00:30:01.324 00:30:01.324 ==================== 00:30:01.324 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:01.324 TCP transport: 00:30:01.324 polls: 9380 00:30:01.324 idle_polls: 5487 00:30:01.324 sock_completions: 3893 00:30:01.324 nvme_completions: 6101 00:30:01.324 submitted_requests: 9196 00:30:01.324 queued_requests: 1 00:30:01.324 00:30:01.324 ==================== 00:30:01.324 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:01.324 TCP transport: 00:30:01.324 polls: 17012 00:30:01.324 idle_polls: 12790 00:30:01.324 sock_completions: 4222 00:30:01.324 nvme_completions: 6587 00:30:01.324 submitted_requests: 9872 00:30:01.324 queued_requests: 1 00:30:01.324 ======================================================== 00:30:01.324 Latency(us) 00:30:01.324 Device Information : IOPS MiB/s Average min max 00:30:01.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1524.94 381.23 86455.71 58869.84 144913.23 00:30:01.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1646.43 411.61 78926.73 43243.51 140135.56 00:30:01.324 ======================================================== 00:30:01.324 Total : 3171.37 792.84 82547.01 43243.51 144913.23 00:30:01.324 00:30:01.324 16:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:01.324 16:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.583 16:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:01.583 16:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:01.583 16:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:04.870 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=12a07598-3db7-438b-a639-b0f28a6b7521 00:30:04.870 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 12a07598-3db7-438b-a639-b0f28a6b7521 00:30:04.870 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=12a07598-3db7-438b-a639-b0f28a6b7521 00:30:04.870 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:04.870 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:04.870 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:04.870 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:05.134 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:05.134 { 00:30:05.134 "uuid": "12a07598-3db7-438b-a639-b0f28a6b7521", 00:30:05.134 "name": "lvs_0", 00:30:05.134 "base_bdev": "Nvme0n1", 00:30:05.134 "total_data_clusters": 238234, 00:30:05.134 "free_clusters": 238234, 00:30:05.134 "block_size": 512, 00:30:05.135 "cluster_size": 4194304 00:30:05.135 } 00:30:05.135 ]' 00:30:05.135 16:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="12a07598-3db7-438b-a639-b0f28a6b7521") .free_clusters' 00:30:05.135 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:05.135 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="12a07598-3db7-438b-a639-b0f28a6b7521") .cluster_size' 00:30:05.135 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:05.135 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:05.135 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:05.135 952936 00:30:05.135 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:05.135 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:05.135 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12a07598-3db7-438b-a639-b0f28a6b7521 lbd_0 20480 00:30:05.395 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=cb2fcf5a-e85f-49d0-8d6e-ccc084254960 00:30:05.395 16:43:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore cb2fcf5a-e85f-49d0-8d6e-ccc084254960 lvs_n_0 00:30:06.329 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=3e739602-d4f1-4698-ba5c-9ab47dfbaa40 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 3e739602-d4f1-4698-ba5c-9ab47dfbaa40 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3e739602-d4f1-4698-ba5c-9ab47dfbaa40 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:06.330 { 00:30:06.330 "uuid": "12a07598-3db7-438b-a639-b0f28a6b7521", 00:30:06.330 "name": "lvs_0", 00:30:06.330 "base_bdev": "Nvme0n1", 00:30:06.330 "total_data_clusters": 238234, 00:30:06.330 "free_clusters": 233114, 00:30:06.330 "block_size": 512, 00:30:06.330 
"cluster_size": 4194304 00:30:06.330 }, 00:30:06.330 { 00:30:06.330 "uuid": "3e739602-d4f1-4698-ba5c-9ab47dfbaa40", 00:30:06.330 "name": "lvs_n_0", 00:30:06.330 "base_bdev": "cb2fcf5a-e85f-49d0-8d6e-ccc084254960", 00:30:06.330 "total_data_clusters": 5114, 00:30:06.330 "free_clusters": 5114, 00:30:06.330 "block_size": 512, 00:30:06.330 "cluster_size": 4194304 00:30:06.330 } 00:30:06.330 ]' 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3e739602-d4f1-4698-ba5c-9ab47dfbaa40") .free_clusters' 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3e739602-d4f1-4698-ba5c-9ab47dfbaa40") .cluster_size' 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:06.330 20456 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:06.330 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e739602-d4f1-4698-ba5c-9ab47dfbaa40 lbd_nest_0 20456 00:30:06.588 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=33228bde-06ff-446f-8c17-e9de9d88a53c 00:30:06.588 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.847 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:06.847 16:43:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 33228bde-06ff-446f-8c17-e9de9d88a53c 00:30:07.160 16:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.160 16:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:07.160 16:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:07.160 16:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:07.160 16:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:07.160 16:43:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:19.424 Initializing NVMe Controllers 00:30:19.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:19.424 Initialization complete. Launching workers. 
00:30:19.424 ======================================================== 00:30:19.424 Latency(us) 00:30:19.424 Device Information : IOPS MiB/s Average min max 00:30:19.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.39 0.02 21115.41 127.11 45692.48 00:30:19.424 ======================================================== 00:30:19.424 Total : 47.39 0.02 21115.41 127.11 45692.48 00:30:19.424 00:30:19.424 16:43:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:19.424 16:43:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:29.397 Initializing NVMe Controllers 00:30:29.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:29.397 Initialization complete. Launching workers. 
00:30:29.397 ======================================================== 00:30:29.397 Latency(us) 00:30:29.397 Device Information : IOPS MiB/s Average min max 00:30:29.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 70.38 8.80 14218.89 3991.81 50875.75 00:30:29.397 ======================================================== 00:30:29.397 Total : 70.38 8.80 14218.89 3991.81 50875.75 00:30:29.397 00:30:29.397 16:43:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:29.397 16:43:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:29.397 16:43:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:39.376 Initializing NVMe Controllers 00:30:39.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:39.376 Initialization complete. Launching workers. 
00:30:39.376 ======================================================== 00:30:39.376 Latency(us) 00:30:39.376 Device Information : IOPS MiB/s Average min max 00:30:39.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8668.49 4.23 3699.43 263.39 43189.59 00:30:39.376 ======================================================== 00:30:39.376 Total : 8668.49 4.23 3699.43 263.39 43189.59 00:30:39.376 00:30:39.376 16:44:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:39.376 16:44:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:49.354 Initializing NVMe Controllers 00:30:49.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:49.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:49.354 Initialization complete. Launching workers. 
00:30:49.354 ======================================================== 00:30:49.354 Latency(us) 00:30:49.354 Device Information : IOPS MiB/s Average min max 00:30:49.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3634.80 454.35 8803.86 629.45 21352.95 00:30:49.354 ======================================================== 00:30:49.354 Total : 3634.80 454.35 8803.86 629.45 21352.95 00:30:49.354 00:30:49.354 16:44:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:49.354 16:44:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:49.354 16:44:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.331 Initializing NVMe Controllers 00:30:59.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.331 Controller IO queue size 128, less than required. 00:30:59.331 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:59.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.331 Initialization complete. Launching workers. 
00:30:59.331 ======================================================== 00:30:59.331 Latency(us) 00:30:59.331 Device Information : IOPS MiB/s Average min max 00:30:59.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15913.60 7.77 8047.14 1286.67 48712.78 00:30:59.331 ======================================================== 00:30:59.331 Total : 15913.60 7.77 8047.14 1286.67 48712.78 00:30:59.331 00:30:59.331 16:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:59.331 16:44:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.311 Initializing NVMe Controllers 00:31:09.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:09.311 Controller IO queue size 128, less than required. 00:31:09.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:09.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:09.311 Initialization complete. Launching workers. 
00:31:09.311 ======================================================== 00:31:09.311 Latency(us) 00:31:09.311 Device Information : IOPS MiB/s Average min max 00:31:09.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.60 150.57 106361.69 11988.33 221875.86 00:31:09.311 ======================================================== 00:31:09.311 Total : 1204.60 150.57 106361.69 11988.33 221875.86 00:31:09.311 00:31:09.311 16:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.569 16:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 33228bde-06ff-446f-8c17-e9de9d88a53c 00:31:10.135 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:10.393 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb2fcf5a-e85f-49d0-8d6e-ccc084254960 00:31:10.652 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.911 rmmod nvme_tcp 00:31:10.911 rmmod nvme_fabrics 00:31:10.911 rmmod nvme_keyring 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1121081 ']' 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1121081 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1121081 ']' 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1121081 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1121081 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1121081' 00:31:10.911 killing process with pid 1121081 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1121081 00:31:10.911 16:44:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1121081 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.815 16:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.721 00:31:14.721 real 1m33.331s 00:31:14.721 user 5m33.456s 00:31:14.721 sys 0m16.591s 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:14.721 ************************************ 00:31:14.721 END TEST nvmf_perf 00:31:14.721 ************************************ 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:14.721 ************************************ 00:31:14.721 START TEST nvmf_fio_host 00:31:14.721 ************************************ 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:14.721 * Looking for test storage... 00:31:14.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:31:14.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.721 --rc genhtml_branch_coverage=1 00:31:14.721 --rc genhtml_function_coverage=1 00:31:14.721 --rc genhtml_legend=1 00:31:14.721 --rc geninfo_all_blocks=1 00:31:14.721 --rc geninfo_unexecuted_blocks=1 00:31:14.721 00:31:14.721 ' 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:14.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.721 --rc genhtml_branch_coverage=1 00:31:14.721 --rc genhtml_function_coverage=1 00:31:14.721 --rc genhtml_legend=1 00:31:14.721 --rc geninfo_all_blocks=1 00:31:14.721 --rc geninfo_unexecuted_blocks=1 00:31:14.721 00:31:14.721 ' 00:31:14.721 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:14.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.721 --rc genhtml_branch_coverage=1 00:31:14.721 --rc genhtml_function_coverage=1 00:31:14.721 --rc genhtml_legend=1 00:31:14.721 --rc geninfo_all_blocks=1 00:31:14.722 --rc geninfo_unexecuted_blocks=1 00:31:14.722 00:31:14.722 ' 00:31:14.722 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:14.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.722 --rc genhtml_branch_coverage=1 00:31:14.722 --rc genhtml_function_coverage=1 00:31:14.722 --rc genhtml_legend=1 00:31:14.722 --rc geninfo_all_blocks=1 00:31:14.722 --rc geninfo_unexecuted_blocks=1 00:31:14.722 00:31:14.722 ' 00:31:14.722 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.722 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.981 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.982 16:44:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:14.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.982 16:44:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.982 16:44:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:31:21.550 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.550 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:21.551 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.551 16:44:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:21.551 Found net devices under 0000:af:00.0: cvl_0_0 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:21.551 Found net devices under 0000:af:00.1: cvl_0_1 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.551 16:44:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:31:21.551 00:31:21.551 --- 10.0.0.2 ping statistics --- 00:31:21.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.551 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:31:21.551 00:31:21.551 --- 10.0.0.1 ping statistics --- 00:31:21.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.551 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1137972 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1137972 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1137972 ']' 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.551 16:44:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.551 [2024-12-14 16:44:50.930171] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:31:21.551 [2024-12-14 16:44:50.930213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.551 [2024-12-14 16:44:51.007648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:21.551 [2024-12-14 16:44:51.031449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.551 [2024-12-14 16:44:51.031486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:21.551 [2024-12-14 16:44:51.031494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.551 [2024-12-14 16:44:51.031500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.551 [2024-12-14 16:44:51.031505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.551 [2024-12-14 16:44:51.032943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.551 [2024-12-14 16:44:51.033052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.551 [2024-12-14 16:44:51.033158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.551 [2024-12-14 16:44:51.033159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.551 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.551 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:21.551 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:21.551 [2024-12-14 16:44:51.285170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.551 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:21.551 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:21.551 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.551 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:21.551 Malloc1 00:31:21.552 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:21.810 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:22.069 16:44:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.327 [2024-12-14 16:44:52.163892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:22.327 16:44:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:22.327 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:22.599 16:44:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.859 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:22.859 fio-3.35 00:31:22.859 Starting 1 thread 00:31:25.400 00:31:25.400 test: (groupid=0, jobs=1): err= 0: pid=1138346: Sat Dec 14 16:44:55 2024 00:31:25.400 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(93.0MiB/2005msec) 00:31:25.400 slat (nsec): min=1529, max=239308, avg=1741.44, stdev=1955.97 00:31:25.400 clat (usec): min=2339, max=10591, avg=5937.30, stdev=439.93 00:31:25.400 lat (usec): min=2375, max=10593, avg=5939.04, stdev=439.78 00:31:25.400 clat percentiles (usec): 00:31:25.400 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:31:25.400 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:31:25.400 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:31:25.400 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 8717], 99.95th=[ 9372], 00:31:25.400 | 99.99th=[10159] 00:31:25.400 bw ( KiB/s): min=46168, max=48128, per=99.96%, avg=47472.00, stdev=923.97, samples=4 00:31:25.400 iops : min=11542, max=12032, avg=11868.00, stdev=230.99, samples=4 00:31:25.400 write: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(92.6MiB/2005msec); 0 zone resets 00:31:25.400 slat (nsec): min=1576, max=153899, avg=1793.81, stdev=1152.96 00:31:25.400 clat (usec): min=1900, max=8897, avg=4798.33, stdev=367.69 00:31:25.400 lat (usec): min=1915, max=8899, avg=4800.12, stdev=367.60 00:31:25.400 clat percentiles (usec): 00:31:25.400 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:31:25.400 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 
00:31:25.400 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:31:25.400 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7111], 99.95th=[ 8094], 00:31:25.400 | 99.99th=[ 8848] 00:31:25.400 bw ( KiB/s): min=46656, max=47808, per=100.00%, avg=47280.00, stdev=528.73, samples=4 00:31:25.400 iops : min=11664, max=11952, avg=11820.00, stdev=132.18, samples=4 00:31:25.400 lat (msec) : 2=0.01%, 4=0.63%, 10=99.35%, 20=0.01% 00:31:25.400 cpu : usr=72.65%, sys=26.40%, ctx=95, majf=0, minf=3 00:31:25.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:25.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:25.400 issued rwts: total=23804,23699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.400 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:25.400 00:31:25.400 Run status group 0 (all jobs): 00:31:25.400 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=93.0MiB (97.5MB), run=2005-2005msec 00:31:25.400 WRITE: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=92.6MiB (97.1MB), run=2005-2005msec 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:25.400 16:44:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:25.659 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:25.659 fio-3.35 00:31:25.659 Starting 1 thread 00:31:27.028 [2024-12-14 16:44:56.987796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2404fd0 is same with the state(6) to be set 00:31:27.028 [2024-12-14 16:44:56.987853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2404fd0 is same with the state(6) to be set 00:31:27.958 00:31:27.958 test: (groupid=0, jobs=1): err= 0: pid=1138907: Sat Dec 14 16:44:57 2024 00:31:27.958 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(334MiB/2007msec) 00:31:27.958 slat (nsec): min=2447, max=92443, avg=2856.04, stdev=1463.21 00:31:27.958 clat (usec): min=1898, max=51158, avg=6967.40, stdev=3527.96 00:31:27.958 lat (usec): min=1900, max=51161, avg=6970.25, stdev=3527.99 00:31:27.958 clat percentiles (usec): 00:31:27.958 | 1.00th=[ 3654], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5276], 00:31:27.958 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:31:27.958 | 70.00th=[ 7504], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9503], 00:31:27.958 | 99.00th=[11600], 99.50th=[45351], 99.90th=[50070], 99.95th=[50594], 00:31:27.958 | 99.99th=[51119] 00:31:27.958 bw ( KiB/s): min=79456, max=97916, per=51.18%, avg=87319.00, stdev=7719.32, samples=4 00:31:27.958 iops : min= 4966, max= 6119, avg=5457.25, stdev=482.11, samples=4 00:31:27.958 write: IOPS=6514, BW=102MiB/s (107MB/s)(179MiB/1754msec); 0 
zone resets 00:31:27.958 slat (usec): min=28, max=297, avg=32.06, stdev= 6.21 00:31:27.958 clat (usec): min=2990, max=14005, avg=8567.57, stdev=1451.75 00:31:27.958 lat (usec): min=3023, max=14036, avg=8599.63, stdev=1452.41 00:31:27.958 clat percentiles (usec): 00:31:27.958 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 00:31:27.958 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:31:27.958 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:31:27.958 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13173], 99.95th=[13566], 00:31:27.958 | 99.99th=[13960] 00:31:27.958 bw ( KiB/s): min=83232, max=102195, per=87.28%, avg=90972.75, stdev=8019.88, samples=4 00:31:27.958 iops : min= 5202, max= 6387, avg=5685.75, stdev=501.15, samples=4 00:31:27.958 lat (msec) : 2=0.01%, 4=1.87%, 10=89.89%, 20=7.84%, 50=0.32% 00:31:27.958 lat (msec) : 100=0.06% 00:31:27.958 cpu : usr=82.86%, sys=14.80%, ctx=187, majf=0, minf=3 00:31:27.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:27.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.958 issued rwts: total=21402,11426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.958 00:31:27.958 Run status group 0 (all jobs): 00:31:27.958 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=334MiB (351MB), run=2007-2007msec 00:31:27.958 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=179MiB (187MB), run=1754-1754msec 00:31:27.958 16:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:28.215 16:44:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:28.215 16:44:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:31.487 Nvme0n1 00:31:31.487 16:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=784a39d8-7b2d-4ba4-8d1d-e0788f6e68d9 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 784a39d8-7b2d-4ba4-8d1d-e0788f6e68d9 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=784a39d8-7b2d-4ba4-8d1d-e0788f6e68d9 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local 
lvs_info 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:34.758 { 00:31:34.758 "uuid": "784a39d8-7b2d-4ba4-8d1d-e0788f6e68d9", 00:31:34.758 "name": "lvs_0", 00:31:34.758 "base_bdev": "Nvme0n1", 00:31:34.758 "total_data_clusters": 930, 00:31:34.758 "free_clusters": 930, 00:31:34.758 "block_size": 512, 00:31:34.758 "cluster_size": 1073741824 00:31:34.758 } 00:31:34.758 ]' 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="784a39d8-7b2d-4ba4-8d1d-e0788f6e68d9") .free_clusters' 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="784a39d8-7b2d-4ba4-8d1d-e0788f6e68d9") .cluster_size' 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:34.758 952320 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:34.758 54c56573-689e-46cb-8cd3-683524f4db7a 00:31:34.758 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:35.015 16:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:35.271 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.532 16:45:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:35.532 16:45:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.825 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:35.825 fio-3.35 00:31:35.825 Starting 1 thread 00:31:38.389 00:31:38.389 test: (groupid=0, jobs=1): err= 0: 
pid=1140902: Sat Dec 14 16:45:08 2024 00:31:38.389 read: IOPS=8102, BW=31.6MiB/s (33.2MB/s)(63.5MiB/2006msec) 00:31:38.389 slat (nsec): min=1526, max=102346, avg=1722.93, stdev=1173.07 00:31:38.389 clat (usec): min=583, max=169948, avg=8654.01, stdev=10255.64 00:31:38.389 lat (usec): min=585, max=169969, avg=8655.73, stdev=10255.82 00:31:38.389 clat percentiles (msec): 00:31:38.389 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:31:38.389 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:31:38.389 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:31:38.389 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 169], 00:31:38.389 | 99.99th=[ 171] 00:31:38.389 bw ( KiB/s): min=23080, max=35712, per=99.94%, avg=32388.00, stdev=6207.55, samples=4 00:31:38.389 iops : min= 5770, max= 8928, avg=8097.00, stdev=1551.89, samples=4 00:31:38.389 write: IOPS=8100, BW=31.6MiB/s (33.2MB/s)(63.5MiB/2006msec); 0 zone resets 00:31:38.390 slat (nsec): min=1570, max=86127, avg=1787.28, stdev=780.99 00:31:38.390 clat (usec): min=213, max=168510, avg=7050.94, stdev=9571.01 00:31:38.390 lat (usec): min=215, max=168515, avg=7052.73, stdev=9571.20 00:31:38.390 clat percentiles (msec): 00:31:38.390 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:31:38.390 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:38.390 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:38.390 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 169], 00:31:38.390 | 99.99th=[ 169] 00:31:38.390 bw ( KiB/s): min=24040, max=35328, per=99.88%, avg=32362.00, stdev=5550.77, samples=4 00:31:38.390 iops : min= 6010, max= 8832, avg=8090.50, stdev=1387.69, samples=4 00:31:38.390 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:38.390 lat (msec) : 2=0.04%, 4=0.25%, 10=99.02%, 20=0.27%, 250=0.39% 00:31:38.390 cpu : usr=72.72%, sys=26.48%, ctx=94, majf=0, minf=3 00:31:38.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, 
>=64=99.8% 00:31:38.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.390 issued rwts: total=16253,16249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.390 00:31:38.390 Run status group 0 (all jobs): 00:31:38.390 READ: bw=31.6MiB/s (33.2MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=63.5MiB (66.6MB), run=2006-2006msec 00:31:38.390 WRITE: bw=31.6MiB/s (33.2MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=63.5MiB (66.6MB), run=2006-2006msec 00:31:38.390 16:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:38.390 16:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:39.761 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c3174cc8-7581-4ca0-af92-13c6be6fc06c 00:31:39.761 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb c3174cc8-7581-4ca0-af92-13c6be6fc06c 00:31:39.761 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=c3174cc8-7581-4ca0-af92-13c6be6fc06c 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:39.762 { 00:31:39.762 "uuid": "784a39d8-7b2d-4ba4-8d1d-e0788f6e68d9", 00:31:39.762 "name": "lvs_0", 00:31:39.762 "base_bdev": "Nvme0n1", 00:31:39.762 "total_data_clusters": 930, 00:31:39.762 "free_clusters": 0, 00:31:39.762 "block_size": 512, 00:31:39.762 "cluster_size": 1073741824 00:31:39.762 }, 00:31:39.762 { 00:31:39.762 "uuid": "c3174cc8-7581-4ca0-af92-13c6be6fc06c", 00:31:39.762 "name": "lvs_n_0", 00:31:39.762 "base_bdev": "54c56573-689e-46cb-8cd3-683524f4db7a", 00:31:39.762 "total_data_clusters": 237847, 00:31:39.762 "free_clusters": 237847, 00:31:39.762 "block_size": 512, 00:31:39.762 "cluster_size": 4194304 00:31:39.762 } 00:31:39.762 ]' 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c3174cc8-7581-4ca0-af92-13c6be6fc06c") .free_clusters' 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c3174cc8-7581-4ca0-af92-13c6be6fc06c") .cluster_size' 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:39.762 951388 00:31:39.762 16:45:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:40.326 7ed17fc3-b9ba-405e-9c45-10eb03dcadb1 00:31:40.326 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:40.584 16:45:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:40.841 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:40.841 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.841 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.841 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:40.841 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.841 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:40.842 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:41.099 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:41.099 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:41.099 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:41.099 16:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.359 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:41.359 fio-3.35 00:31:41.359 Starting 1 thread 00:31:43.887 00:31:43.887 test: (groupid=0, jobs=1): err= 0: pid=1142142: Sat Dec 14 16:45:13 2024 00:31:43.887 read: IOPS=7772, BW=30.4MiB/s 
(31.8MB/s)(62.2MiB/2047msec) 00:31:43.887 slat (nsec): min=1507, max=99669, avg=1674.76, stdev=1134.07 00:31:43.887 clat (usec): min=2897, max=56453, avg=9070.40, stdev=2947.55 00:31:43.887 lat (usec): min=2901, max=56454, avg=9072.07, stdev=2947.53 00:31:43.887 clat percentiles (usec): 00:31:43.887 | 1.00th=[ 7177], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 00:31:43.887 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:31:43.887 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:31:43.887 | 99.00th=[10552], 99.50th=[11076], 99.90th=[54264], 99.95th=[54789], 00:31:43.887 | 99.99th=[55837] 00:31:43.887 bw ( KiB/s): min=30440, max=32168, per=100.00%, avg=31680.00, stdev=830.43, samples=4 00:31:43.887 iops : min= 7610, max= 8042, avg=7920.00, stdev=207.61, samples=4 00:31:43.887 write: IOPS=7746, BW=30.3MiB/s (31.7MB/s)(61.9MiB/2047msec); 0 zone resets 00:31:43.887 slat (nsec): min=1570, max=80600, avg=1767.27, stdev=770.54 00:31:43.887 clat (usec): min=1350, max=54379, avg=7317.35, stdev=2656.94 00:31:43.887 lat (usec): min=1356, max=54381, avg=7319.11, stdev=2656.93 00:31:43.887 clat percentiles (usec): 00:31:43.887 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 6652], 00:31:43.887 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:31:43.887 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:31:43.887 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[52691], 99.95th=[52691], 00:31:43.887 | 99.99th=[54264] 00:31:43.887 bw ( KiB/s): min=31496, max=31720, per=100.00%, avg=31602.00, stdev=93.15, samples=4 00:31:43.887 iops : min= 7874, max= 7930, avg=7900.50, stdev=23.29, samples=4 00:31:43.887 lat (msec) : 2=0.01%, 4=0.13%, 10=96.69%, 20=2.79%, 50=0.14% 00:31:43.887 lat (msec) : 100=0.26% 00:31:43.887 cpu : usr=68.43%, sys=30.79%, ctx=116, majf=0, minf=3 00:31:43.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:43.887 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:43.887 issued rwts: total=15911,15857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:43.887 00:31:43.887 Run status group 0 (all jobs): 00:31:43.887 READ: bw=30.4MiB/s (31.8MB/s), 30.4MiB/s-30.4MiB/s (31.8MB/s-31.8MB/s), io=62.2MiB (65.2MB), run=2047-2047msec 00:31:43.887 WRITE: bw=30.3MiB/s (31.7MB/s), 30.3MiB/s-30.3MiB/s (31.7MB/s-31.7MB/s), io=61.9MiB (64.9MB), run=2047-2047msec 00:31:43.887 16:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:43.887 16:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:43.887 16:45:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:48.066 16:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:48.066 16:45:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:50.592 16:45:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:50.849 16:45:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 
00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.748 rmmod nvme_tcp 00:31:52.748 rmmod nvme_fabrics 00:31:52.748 rmmod nvme_keyring 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1137972 ']' 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1137972 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1137972 ']' 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1137972 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137972 00:31:52.748 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.749 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.749 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137972' 00:31:52.749 killing process with pid 1137972 00:31:52.749 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1137972 00:31:52.749 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1137972 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.008 16:45:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.914 16:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.914 00:31:54.914 real 0m40.354s 00:31:54.914 user 2m41.507s 00:31:54.914 sys 0m9.006s 00:31:54.914 16:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.914 16:45:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.914 ************************************ 00:31:54.914 END TEST nvmf_fio_host 00:31:54.914 ************************************ 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.174 ************************************ 00:31:55.174 START TEST nvmf_failover 00:31:55.174 ************************************ 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:55.174 * Looking for test storage... 
00:31:55.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.174 --rc genhtml_branch_coverage=1 00:31:55.174 --rc genhtml_function_coverage=1 00:31:55.174 --rc genhtml_legend=1 00:31:55.174 --rc geninfo_all_blocks=1 00:31:55.174 --rc geninfo_unexecuted_blocks=1 00:31:55.174 00:31:55.174 ' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:31:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.174 --rc genhtml_branch_coverage=1 00:31:55.174 --rc genhtml_function_coverage=1 00:31:55.174 --rc genhtml_legend=1 00:31:55.174 --rc geninfo_all_blocks=1 00:31:55.174 --rc geninfo_unexecuted_blocks=1 00:31:55.174 00:31:55.174 ' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.174 --rc genhtml_branch_coverage=1 00:31:55.174 --rc genhtml_function_coverage=1 00:31:55.174 --rc genhtml_legend=1 00:31:55.174 --rc geninfo_all_blocks=1 00:31:55.174 --rc geninfo_unexecuted_blocks=1 00:31:55.174 00:31:55.174 ' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:55.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.174 --rc genhtml_branch_coverage=1 00:31:55.174 --rc genhtml_function_coverage=1 00:31:55.174 --rc genhtml_legend=1 00:31:55.174 --rc geninfo_all_blocks=1 00:31:55.174 --rc geninfo_unexecuted_blocks=1 00:31:55.174 00:31:55.174 ' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.174 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.175 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.175 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:55.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:55.175 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.175 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.175 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.175 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:55.433 16:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.002 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.003 16:45:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:02.003 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.003 16:45:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:02.003 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:02.003 16:45:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:02.003 Found net devices under 0000:af:00.0: cvl_0_0 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:02.003 Found net devices under 0000:af:00.1: cvl_0_1 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:02.003 16:45:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.003 16:45:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:02.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:02.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms
00:32:02.003
00:32:02.003 --- 10.0.0.2 ping statistics ---
00:32:02.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:02.003 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:02.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:02.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:32:02.003 00:32:02.003 --- 10.0.0.1 ping statistics --- 00:32:02.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.003 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1147382 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 1147382 00:32:02.003 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1147382 ']' 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:02.004 [2024-12-14 16:45:31.203318] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:02.004 [2024-12-14 16:45:31.203362] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.004 [2024-12-14 16:45:31.278364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:02.004 [2024-12-14 16:45:31.300049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.004 [2024-12-14 16:45:31.300086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.004 [2024-12-14 16:45:31.300093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.004 [2024-12-14 16:45:31.300099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:02.004 [2024-12-14 16:45:31.300104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.004 [2024-12-14 16:45:31.301322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:02.004 [2024-12-14 16:45:31.301431] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.004 [2024-12-14 16:45:31.301432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:02.004 [2024-12-14 16:45:31.592612] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:02.004 Malloc0 00:32:02.004 16:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:02.004 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:02.262 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.519 [2024-12-14 16:45:32.410339] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.519 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:02.519 [2024-12-14 16:45:32.602876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:02.777 [2024-12-14 16:45:32.783497] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1147633 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1147633 /var/tmp/bdevperf.sock 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1147633 ']' 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:02.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.777 16:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:03.035 16:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:03.035 16:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:03.035 16:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:03.600 NVMe0n1 00:32:03.600 16:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:03.858 00:32:03.858 16:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:03.858 16:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1147856 00:32:03.858 16:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
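The setup traced above (target-side transport/bdev/subsystem creation, three TCP listeners, then two `-x failover` controller attaches from the bdevperf host) can be condensed into a dry-run sketch. This is an illustration only, not part of the test output: `rpc()` here is a local stub that echoes instead of invoking SPDK's real `scripts/rpc.py`, and the flow simply replays the commands and arguments visible in the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the failover test's RPC sequence (commands and
# arguments copied from the log above). rpc() is a stub that echoes
# instead of calling SPDK's scripts/rpc.py, so the flow can be traced
# without a live nvmf_tgt.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# Target-side setup: TCP transport, Malloc0 namespace, three listeners.
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done

# Host side: attach the same subsystem via two ports with -x failover,
# so I/O can move to a surviving path when a listener is removed.
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover

# Fault injection while bdevperf runs: drop the currently active listener.
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

In the real run these RPCs are issued by `host/failover.sh` against the live target and `/var/tmp/bdevperf.sock`; the stub only makes the order of operations easy to follow.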
00:32:04.790 16:45:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.048 [2024-12-14 16:45:34.922511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff1ac0 is same with the state(6) to be set 00:32:05.048 [... identical "recv state of tqpair=0xff1ac0 is same with the state(6) to be set" messages, timestamps 16:45:34.922564 through 16:45:34.922840, elided ...] 00:32:05.049 16:45:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:08.327 16:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:08.327 00:32:08.585 16:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:08.585 16:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:11.864 16:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.864 [2024-12-14 16:45:41.820124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.864 16:45:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:12.798 16:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:13.056 [2024-12-14 16:45:43.043047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3690 is same with the state(6) to be set 00:32:13.056 [... identical "recv state of tqpair=0xff3690 is same with the state(6) to be set" messages, timestamps 16:45:43.043087 through 16:45:43.043169, elided ...] 00:32:13.056 16:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1147856 00:32:19.621 { 00:32:19.621 "results": [ 00:32:19.621 { 00:32:19.621 "job": "NVMe0n1", 00:32:19.621 "core_mask": "0x1", 00:32:19.621
"workload": "verify", 00:32:19.621 "status": "finished", 00:32:19.621 "verify_range": { 00:32:19.621 "start": 0, 00:32:19.621 "length": 16384 00:32:19.621 }, 00:32:19.621 "queue_depth": 128, 00:32:19.621 "io_size": 4096, 00:32:19.621 "runtime": 15.004159, 00:32:19.621 "iops": 11319.461490644027, 00:32:19.621 "mibps": 44.21664644782823, 00:32:19.621 "io_failed": 4293, 00:32:19.621 "io_timeout": 0, 00:32:19.621 "avg_latency_us": 11007.31199823232, 00:32:19.621 "min_latency_us": 415.45142857142855, 00:32:19.621 "max_latency_us": 21970.16380952381 00:32:19.621 } 00:32:19.621 ], 00:32:19.621 "core_count": 1 00:32:19.621 } 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1147633 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1147633 ']' 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1147633 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147633 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147633' 00:32:19.621 killing process with pid 1147633 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1147633 00:32:19.621 16:45:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1147633 00:32:19.621 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # 
cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:19.621 [2024-12-14 16:45:32.855108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:19.621 [2024-12-14 16:45:32.855160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147633 ] 00:32:19.621 [2024-12-14 16:45:32.933114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.621 [2024-12-14 16:45:32.955689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.621 Running I/O for 15 seconds... 00:32:19.621 11501.00 IOPS, 44.93 MiB/s [2024-12-14T15:45:49.707Z] [2024-12-14 16:45:34.922911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.621 [2024-12-14 16:45:34.922943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.621 [2024-12-14 16:45:34.922953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.621 [2024-12-14 16:45:34.922961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.621 [2024-12-14 16:45:34.922969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.621 [2024-12-14 16:45:34.922976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.621 [2024-12-14 16:45:34.922984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:32:19.621 [2024-12-14 16:45:34.922992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.621 [2024-12-14 16:45:34.922999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839460 is same with the state(6) to be set 00:32:19.621 [2024-12-14 16:45:34.923785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.622 [2024-12-14 16:45:34.923802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.923817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.923826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.923836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.923843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.923852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.923858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.923867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.923874] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [... repeating pattern elided: WRITE sqid:1 commands for lba:102048 through lba:102264 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each followed by an ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completion ...] 00:32:19.622
[2024-12-14 16:45:34.924318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.924325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.924332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.924339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.924347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.924354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.924363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.924369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.924377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.924384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.924391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.924398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.924406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.924413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.924421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.622 [2024-12-14 16:45:34.924427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.622 [2024-12-14 16:45:34.924435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 
[2024-12-14 16:45:34.924585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.924773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 
[2024-12-14 16:45:34.924840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.924988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.924995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.925003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.925010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.925018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.623 [2024-12-14 16:45:34.925024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.925032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.623 [2024-12-14 16:45:34.925039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.623 [2024-12-14 16:45:34.925048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 
[2024-12-14 16:45:34.925094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 
[2024-12-14 16:45:34.925349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.624 [2024-12-14 16:45:34.925505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.624 [2024-12-14 16:45:34.925514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:19.624 [2024-12-14 16:45:34.925520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[… repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: READ sqid:1 lba:101896 through lba:102000, len:8, all completing ABORTED - SQ DELETION (00/08) …]
00:32:19.625 [2024-12-14 16:45:34.925752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:19.625 [2024-12-14 16:45:34.925758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:19.625 [2024-12-14 16:45:34.925765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102008 len:8 PRP1 0x0 PRP2 0x0
00:32:19.625 [2024-12-14 16:45:34.925771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:19.625 [2024-12-14 16:45:34.925817] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:19.625 [2024-12-14 16:45:34.925829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:19.625 [2024-12-14 16:45:34.928614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:19.625 [2024-12-14 16:45:34.928642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x839460 (9): Bad file descriptor
00:32:19.625 [2024-12-14 16:45:34.951710] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:32:19.625 11250.50 IOPS, 43.95 MiB/s [2024-12-14T15:45:49.711Z] 11269.33 IOPS, 44.02 MiB/s [2024-12-14T15:45:49.711Z] 11310.75 IOPS, 44.18 MiB/s [2024-12-14T15:45:49.711Z]
[2024-12-14 16:45:38.603529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:19.625 [2024-12-14 16:45:38.603577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[… repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: WRITE sqid:1 lba:50408 through lba:50504 and READ sqid:1 lba:49504 through lba:50184, len:8, all completing ABORTED - SQ DELETION (00/08) …]
00:32:19.627 
[2024-12-14 16:45:38.605157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-12-14 16:45:38.605164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-12-14 16:45:38.605173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-12-14 16:45:38.605179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-12-14 16:45:38.605189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-12-14 16:45:38.605195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-12-14 16:45:38.605203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-12-14 16:45:38.605209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-12-14 16:45:38.605217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.627 [2024-12-14 16:45:38.605224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.627 [2024-12-14 16:45:38.605233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 
[2024-12-14 16:45:38.605414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.628 [2024-12-14 16:45:38.605435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.628 [2024-12-14 16:45:38.605450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:38.605554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c730 is same with the state(6) to be set 00:32:19.628 [2024-12-14 16:45:38.605580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.628 [2024-12-14 16:45:38.605585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.628 [2024-12-14 16:45:38.605592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50392 len:8 PRP1 0x0 PRP2 0x0 00:32:19.628 [2024-12-14 16:45:38.605598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605642] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:19.628 [2024-12-14 16:45:38.605665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.628 [2024-12-14 16:45:38.605672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.628 [2024-12-14 16:45:38.605688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.628 [2024-12-14 16:45:38.605702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.628 [2024-12-14 16:45:38.605715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:38.605722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:32:19.628 [2024-12-14 16:45:38.608513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:19.628 [2024-12-14 16:45:38.608542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x839460 (9): Bad file descriptor 00:32:19.628 [2024-12-14 16:45:38.635449] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:32:19.628 11232.00 IOPS, 43.88 MiB/s [2024-12-14T15:45:49.714Z] 11274.00 IOPS, 44.04 MiB/s [2024-12-14T15:45:49.714Z] 11306.29 IOPS, 44.17 MiB/s [2024-12-14T15:45:49.714Z] 11298.12 IOPS, 44.13 MiB/s [2024-12-14T15:45:49.714Z] 11314.56 IOPS, 44.20 MiB/s [2024-12-14T15:45:49.714Z] [2024-12-14 16:45:43.045454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:43.045490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:43.045515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:43.045532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 
[2024-12-14 16:45:43.045549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:43.045578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:43.045593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:43.045609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:43.045625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.628 [2024-12-14 16:45:43.045642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.628 [2024-12-14 16:45:43.045651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-12-14 16:45:43.045658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-12-14 16:45:43.045676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-12-14 16:45:43.045693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-12-14 16:45:43.045714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-12-14 16:45:43.045730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-12-14 16:45:43.045746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.629 [2024-12-14 16:45:43.045760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 
[2024-12-14 16:45:43.045840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045923] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.045987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.045995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.629 [2024-12-14 16:45:43.046187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:19.629 [2024-12-14 16:45:43.046193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: 2024-12-14 16:45:43.046201 through 16:45:43.058758 — nvme_qpair.c repeats the same nvme_io_qpair_print_command (243) / spdk_nvme_print_completion (474) pair for every queued WRITE sqid:1 len:8 from lba:69392 through lba:69680 (SGL DATA BLOCK OFFSET 0x0 len:0x1000, varying cid), each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; from lba:69688 through lba:70040 the pattern switches to nvme_qpair_abort_queued_reqs (579) *ERROR*: aborting queued i/o, nvme_qpair_manual_complete_request (558) *NOTICE*: Command completed manually, and WRITE sqid:1 cid:0 len:8 PRP1 0x0 PRP2 0x0, again each completing ABORTED - SQ DELETION (00/08)]
00:32:19.632 [2024-12-14 16:45:43.058744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.632 [2024-12-14 16:45:43.058750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.632 [2024-12-14 16:45:43.058758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70048 len:8 PRP1 0x0 PRP2 0x0 00:32:19.632 [2024-12-14 16:45:43.058769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-12-14 16:45:43.058778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:19.632 [2024-12-14 16:45:43.058785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:19.632 [2024-12-14 16:45:43.058793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70056 len:8 PRP1 0x0 PRP2 0x0 00:32:19.632 [2024-12-14 16:45:43.058802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-12-14 16:45:43.058852] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:19.632 [2024-12-14 16:45:43.058881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.632 [2024-12-14 16:45:43.058892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-12-14 16:45:43.058903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.632 [2024-12-14 16:45:43.058912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-12-14 16:45:43.058922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.632 [2024-12-14 16:45:43.058931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-12-14 16:45:43.058941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:32:19.632 [2024-12-14 16:45:43.058950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.632 [2024-12-14 16:45:43.058959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:19.632 [2024-12-14 16:45:43.059000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x839460 (9): Bad file descriptor 00:32:19.632 [2024-12-14 16:45:43.062738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:19.633 [2024-12-14 16:45:43.093095] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:32:19.633 11281.10 IOPS, 44.07 MiB/s [2024-12-14T15:45:49.719Z] 11291.36 IOPS, 44.11 MiB/s [2024-12-14T15:45:49.719Z] 11299.00 IOPS, 44.14 MiB/s [2024-12-14T15:45:49.719Z] 11316.08 IOPS, 44.20 MiB/s [2024-12-14T15:45:49.719Z] 11322.00 IOPS, 44.23 MiB/s 00:32:19.633 Latency(us) 00:32:19.633 [2024-12-14T15:45:49.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.633 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:19.633 Verification LBA range: start 0x0 length 0x4000 00:32:19.633 NVMe0n1 : 15.00 11319.46 44.22 286.12 0.00 11007.31 415.45 21970.16 00:32:19.633 [2024-12-14T15:45:49.719Z] =================================================================================================================== 00:32:19.633 [2024-12-14T15:45:49.719Z] Total : 11319.46 44.22 286.12 0.00 11007.31 415.45 21970.16 00:32:19.633 Received shutdown signal, test time was about 15.000000 seconds 00:32:19.633 00:32:19.633 Latency(us) 00:32:19.633 [2024-12-14T15:45:49.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.633 [2024-12-14T15:45:49.719Z] 
=================================================================================================================== 00:32:19.633 [2024-12-14T15:45:49.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1150304 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1150304 /var/tmp/bdevperf.sock 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1150304 ']' 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:19.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:19.633 [2024-12-14 16:45:49.504924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:19.633 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:19.633 [2024-12-14 16:45:49.693498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:19.891 16:45:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:20.149 NVMe0n1 00:32:20.149 16:45:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:20.406 00:32:20.406 16:45:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:20.972 00:32:20.972 16:45:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:20.972 16:45:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:20.972 16:45:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:21.230 16:45:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:24.509 16:45:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:24.509 16:45:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:24.509 16:45:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1151177 00:32:24.509 16:45:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:24.509 16:45:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1151177 00:32:25.886 { 00:32:25.886 "results": [ 00:32:25.886 { 00:32:25.886 "job": "NVMe0n1", 00:32:25.886 "core_mask": "0x1", 00:32:25.886 "workload": "verify", 00:32:25.886 "status": "finished", 00:32:25.886 "verify_range": { 00:32:25.886 "start": 0, 00:32:25.886 "length": 16384 00:32:25.886 }, 00:32:25.886 "queue_depth": 128, 00:32:25.886 "io_size": 4096, 00:32:25.886 "runtime": 1.006254, 00:32:25.886 "iops": 11269.520419297713, 00:32:25.886 "mibps": 44.02156413788169, 00:32:25.886 "io_failed": 0, 00:32:25.886 "io_timeout": 0, 00:32:25.886 "avg_latency_us": 
11314.562053245989, 00:32:25.886 "min_latency_us": 1217.097142857143, 00:32:25.886 "max_latency_us": 14542.750476190477 00:32:25.886 } 00:32:25.886 ], 00:32:25.886 "core_count": 1 00:32:25.886 } 00:32:25.886 16:45:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:25.886 [2024-12-14 16:45:49.149976] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:25.886 [2024-12-14 16:45:49.150026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150304 ] 00:32:25.886 [2024-12-14 16:45:49.226694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.886 [2024-12-14 16:45:49.246404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.886 [2024-12-14 16:45:51.211329] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:25.886 [2024-12-14 16:45:51.211377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.886 [2024-12-14 16:45:51.211389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.886 [2024-12-14 16:45:51.211398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.886 [2024-12-14 16:45:51.211405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.886 [2024-12-14 16:45:51.211412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:32:25.886 [2024-12-14 16:45:51.211419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.886 [2024-12-14 16:45:51.211426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.886 [2024-12-14 16:45:51.211433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.886 [2024-12-14 16:45:51.211444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:25.886 [2024-12-14 16:45:51.211469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:25.886 [2024-12-14 16:45:51.211483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf37460 (9): Bad file descriptor 00:32:25.886 [2024-12-14 16:45:51.216853] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:25.886 Running I/O for 1 seconds... 
00:32:25.886 11212.00 IOPS, 43.80 MiB/s 00:32:25.886 Latency(us) 00:32:25.886 [2024-12-14T15:45:55.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.886 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:25.886 Verification LBA range: start 0x0 length 0x4000 00:32:25.886 NVMe0n1 : 1.01 11269.52 44.02 0.00 0.00 11314.56 1217.10 14542.75 00:32:25.886 [2024-12-14T15:45:55.972Z] =================================================================================================================== 00:32:25.886 [2024-12-14T15:45:55.972Z] Total : 11269.52 44.02 0.00 0.00 11314.56 1217.10 14542.75 00:32:25.886 16:45:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:25.886 16:45:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:25.886 16:45:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:26.144 16:45:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:26.144 16:45:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:26.144 16:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:26.402 16:45:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1150304 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1150304 ']' 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1150304 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1150304 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150304' 00:32:29.678 killing process with pid 1150304 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1150304 00:32:29.678 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1150304 00:32:29.936 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:29.936 16:45:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.936 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:29.936 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:29.937 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:29.937 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.937 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:29.937 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.937 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:29.937 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.937 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:30.195 rmmod nvme_tcp 00:32:30.195 rmmod nvme_fabrics 00:32:30.195 rmmod nvme_keyring 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1147382 ']' 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1147382 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1147382 ']' 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1147382 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147382 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147382' 00:32:30.195 killing process with pid 1147382 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1147382 00:32:30.195 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1147382 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.454 16:46:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.359 16:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:32.359 00:32:32.359 real 0m37.348s 00:32:32.359 user 1m58.481s 00:32:32.359 sys 
0m7.854s 00:32:32.359 16:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.359 16:46:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:32.359 ************************************ 00:32:32.359 END TEST nvmf_failover 00:32:32.359 ************************************ 00:32:32.359 16:46:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:32.359 16:46:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:32.359 16:46:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.359 16:46:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.619 ************************************ 00:32:32.619 START TEST nvmf_host_discovery 00:32:32.619 ************************************ 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:32.619 * Looking for test storage... 
00:32:32.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:32.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.619 --rc genhtml_branch_coverage=1 00:32:32.619 --rc genhtml_function_coverage=1 00:32:32.619 --rc 
genhtml_legend=1 00:32:32.619 --rc geninfo_all_blocks=1 00:32:32.619 --rc geninfo_unexecuted_blocks=1 00:32:32.619 00:32:32.619 ' 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:32.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.619 --rc genhtml_branch_coverage=1 00:32:32.619 --rc genhtml_function_coverage=1 00:32:32.619 --rc genhtml_legend=1 00:32:32.619 --rc geninfo_all_blocks=1 00:32:32.619 --rc geninfo_unexecuted_blocks=1 00:32:32.619 00:32:32.619 ' 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:32.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.619 --rc genhtml_branch_coverage=1 00:32:32.619 --rc genhtml_function_coverage=1 00:32:32.619 --rc genhtml_legend=1 00:32:32.619 --rc geninfo_all_blocks=1 00:32:32.619 --rc geninfo_unexecuted_blocks=1 00:32:32.619 00:32:32.619 ' 00:32:32.619 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:32.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.619 --rc genhtml_branch_coverage=1 00:32:32.619 --rc genhtml_function_coverage=1 00:32:32.619 --rc genhtml_legend=1 00:32:32.619 --rc geninfo_all_blocks=1 00:32:32.619 --rc geninfo_unexecuted_blocks=1 00:32:32.619 00:32:32.619 ' 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.620 16:46:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.620 16:46:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.620 16:46:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:32.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:32.620 16:46:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.308 
16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.308 16:46:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:39.308 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:39.308 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:39.308 Found net devices under 0000:af:00.0: cvl_0_0 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:39.308 Found net devices under 0000:af:00.1: cvl_0_1 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.308 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:39.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:32:39.309 00:32:39.309 --- 10.0.0.2 ping statistics --- 00:32:39.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.309 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:39.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:32:39.309 00:32:39.309 --- 10.0.0.1 ping statistics --- 00:32:39.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.309 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.309 
16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1155420 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1155420 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1155420 ']' 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 [2024-12-14 16:46:08.568139] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:39.309 [2024-12-14 16:46:08.568186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.309 [2024-12-14 16:46:08.644456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.309 [2024-12-14 16:46:08.665645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.309 [2024-12-14 16:46:08.665683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.309 [2024-12-14 16:46:08.665690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.309 [2024-12-14 16:46:08.665696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.309 [2024-12-14 16:46:08.665701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:39.309 [2024-12-14 16:46:08.666213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 [2024-12-14 16:46:08.797223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 [2024-12-14 16:46:08.809392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:39.309 16:46:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 null0 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 null1 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1155588 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1155588 /tmp/host.sock 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1155588 ']' 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:39.309 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.309 16:46:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 [2024-12-14 16:46:08.884150] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:39.309 [2024-12-14 16:46:08.884191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155588 ] 00:32:39.309 [2024-12-14 16:46:08.956547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.309 [2024-12-14 16:46:08.978607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:39.309 
16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:39.309 16:46:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.309 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:39.310 
16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.310 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.569 [2024-12-14 16:46:09.398923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:39.569 16:46:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:40.137 [2024-12-14 16:46:10.094940] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:40.137 [2024-12-14 16:46:10.094963] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:40.137 [2024-12-14 16:46:10.094976] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:40.137 [2024-12-14 16:46:10.181218] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:40.395 [2024-12-14 16:46:10.397199] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:40.395 [2024-12-14 16:46:10.397969] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x5e8c60:1 started. 00:32:40.395 [2024-12-14 16:46:10.399336] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:40.395 [2024-12-14 16:46:10.399352] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:40.395 [2024-12-14 16:46:10.404040] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5e8c60 was disconnected and freed. delete nvme_qpair. 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.654 16:46:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:40.654 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.913 [2024-12-14 16:46:10.799694] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5e8fe0:1 started. 
00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.913 [2024-12-14 16:46:10.804883] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5e8fe0 was disconnected and freed. delete nvme_qpair. 
00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.913 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.914 [2024-12-14 16:46:10.898924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:40.914 [2024-12-14 16:46:10.899390] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:40.914 [2024-12-14 16:46:10.899409] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
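The trace above repeatedly runs `notify_get_notifications -i $notify_id | jq '. | length'` and then advances `notify_id` by the number of events seen (0 → 1 → 2 across the checks). A minimal sketch of that bookkeeping pattern, with a hypothetical stand-in for the live `/tmp/host.sock` RPC (the helper name `get_notifications` and its fake event list are assumptions, not the real SPDK surface):

```shell
#!/usr/bin/env bash
# Running high-water mark: only notifications newer than this are counted.
notify_id=0

# Hypothetical stand-in for "rpc_cmd notify_get_notifications -i N":
# emits one line per notification whose id is newer than N.
get_notifications() {
    local since=$1 id
    for id in 1 2; do
        ((id > since)) && echo "notification-$id"
    done
    return 0
}

# Count new events and advance notify_id, mirroring discovery.sh@74/@75.
get_notification_count() {
    local events
    events=$(get_notifications "$notify_id")
    notification_count=$(grep -c . <<< "$events")
    notify_id=$((notify_id + notification_count))
}

get_notification_count
echo "$notification_count $notify_id"   # prints "2 2"
```

This matches the log's behavior where a later query with `-i 2` finds nothing new, so `notification_count` drops back to 0 while `notify_id` stays at 2.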
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:40.914 16:46:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.914 16:46:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:41.173 16:46:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.173 [2024-12-14 16:46:11.027803] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:41.173 16:46:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:41.173 [2024-12-14 16:46:11.090335] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:41.173 [2024-12-14 16:46:11.090367] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:41.173 [2024-12-14 16:46:11.090375] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:32:41.173 [2024-12-14 16:46:11.090380] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.109 [2024-12-14 16:46:12.155390] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:42.109 [2024-12-14 16:46:12.155411] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:42.109 [2024-12-14 16:46:12.161614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.109 [2024-12-14 16:46:12.161632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.109 [2024-12-14 16:46:12.161641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.109 [2024-12-14 16:46:12.161652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.109 [2024-12-14 16:46:12.161660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.109 [2024-12-14 16:46:12.161667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.109 [2024-12-14 16:46:12.161674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.109 [2024-12-14 16:46:12.161681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.109 [2024-12-14 16:46:12.161688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bad70 is same with the state(6) to be set 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:42.109 16:46:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:42.109 [2024-12-14 16:46:12.171628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bad70 (9): Bad file descriptor 00:32:42.109 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.109 [2024-12-14 16:46:12.181663] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:42.109 [2024-12-14 16:46:12.181674] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:42.109 [2024-12-14 16:46:12.181680] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:42.109 [2024-12-14 16:46:12.181685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:42.109 [2024-12-14 16:46:12.181700] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:42.109 [2024-12-14 16:46:12.181960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.109 [2024-12-14 16:46:12.181974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bad70 with addr=10.0.0.2, port=4420 00:32:42.109 [2024-12-14 16:46:12.181982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bad70 is same with the state(6) to be set 00:32:42.109 [2024-12-14 16:46:12.181993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bad70 (9): Bad file descriptor 00:32:42.109 [2024-12-14 16:46:12.182004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:42.109 [2024-12-14 16:46:12.182010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:42.109 [2024-12-14 16:46:12.182018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:42.109 [2024-12-14 16:46:12.182024] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:42.110 [2024-12-14 16:46:12.182029] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:42.110 [2024-12-14 16:46:12.182033] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:42.110 [2024-12-14 16:46:12.191730] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:42.110 [2024-12-14 16:46:12.191744] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:42.110 [2024-12-14 16:46:12.191748] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:42.110 [2024-12-14 16:46:12.191752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:42.110 [2024-12-14 16:46:12.191765] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:42.110 [2024-12-14 16:46:12.191937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.110 [2024-12-14 16:46:12.191949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bad70 with addr=10.0.0.2, port=4420 00:32:42.110 [2024-12-14 16:46:12.191956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bad70 is same with the state(6) to be set 00:32:42.110 [2024-12-14 16:46:12.191966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bad70 (9): Bad file descriptor 00:32:42.110 [2024-12-14 16:46:12.191976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:42.110 [2024-12-14 16:46:12.191982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:42.110 [2024-12-14 16:46:12.191989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:42.110 [2024-12-14 16:46:12.191994] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:42.110 [2024-12-14 16:46:12.191999] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:42.110 [2024-12-14 16:46:12.192002] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:42.369 [2024-12-14 16:46:12.201795] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:42.369 [2024-12-14 16:46:12.201818] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:42.369 [2024-12-14 16:46:12.201822] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:42.369 [2024-12-14 16:46:12.201826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:42.369 [2024-12-14 16:46:12.201839] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:42.369 [2024-12-14 16:46:12.202106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.369 [2024-12-14 16:46:12.202119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bad70 with addr=10.0.0.2, port=4420 00:32:42.369 [2024-12-14 16:46:12.202126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bad70 is same with the state(6) to be set 00:32:42.369 [2024-12-14 16:46:12.202137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bad70 (9): Bad file descriptor 00:32:42.369 [2024-12-14 16:46:12.202146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:42.369 [2024-12-14 16:46:12.202152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:42.369 [2024-12-14 16:46:12.202159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:42.369 [2024-12-14 16:46:12.202164] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:42.369 [2024-12-14 16:46:12.202169] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:42.369 [2024-12-14 16:46:12.202173] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:42.369 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.369 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.369 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:42.369 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:42.369 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:42.369 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:42.370 [2024-12-14 16:46:12.211870] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:42.370 [2024-12-14 16:46:12.211882] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:42.370 [2024-12-14 16:46:12.211886] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:32:42.370 [2024-12-14 16:46:12.211890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:42.370 [2024-12-14 16:46:12.211903] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:42.370 [2024-12-14 16:46:12.212067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.370 [2024-12-14 16:46:12.212078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bad70 with addr=10.0.0.2, port=4420 00:32:42.370 [2024-12-14 16:46:12.212085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bad70 is same with the state(6) to be set 00:32:42.370 [2024-12-14 16:46:12.212094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bad70 (9): Bad file descriptor 00:32:42.370 [2024-12-14 16:46:12.212104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:42.370 [2024-12-14 16:46:12.212110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:42.370 [2024-12-14 16:46:12.212117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:42.370 [2024-12-14 16:46:12.212122] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:42.370 [2024-12-14 16:46:12.212126] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:42.370 [2024-12-14 16:46:12.212130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.370 [2024-12-14 16:46:12.221933] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:42.370 [2024-12-14 16:46:12.221947] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:42.370 [2024-12-14 16:46:12.221951] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:42.370 [2024-12-14 16:46:12.221958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:42.370 [2024-12-14 16:46:12.221972] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:42.370 [2024-12-14 16:46:12.222142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.370 [2024-12-14 16:46:12.222153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bad70 with addr=10.0.0.2, port=4420 00:32:42.370 [2024-12-14 16:46:12.222161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bad70 is same with the state(6) to be set 00:32:42.370 [2024-12-14 16:46:12.222171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bad70 (9): Bad file descriptor 00:32:42.370 [2024-12-14 16:46:12.222186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:42.370 [2024-12-14 16:46:12.222193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:42.370 [2024-12-14 16:46:12.222200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:42.370 [2024-12-14 16:46:12.222205] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:42.370 [2024-12-14 16:46:12.222210] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:42.370 [2024-12-14 16:46:12.222214] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:42.370 [2024-12-14 16:46:12.232002] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:42.370 [2024-12-14 16:46:12.232012] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:42.370 [2024-12-14 16:46:12.232016] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:42.370 [2024-12-14 16:46:12.232020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:42.370 [2024-12-14 16:46:12.232031] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:42.370 [2024-12-14 16:46:12.232253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.370 [2024-12-14 16:46:12.232264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bad70 with addr=10.0.0.2, port=4420 00:32:42.370 [2024-12-14 16:46:12.232272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bad70 is same with the state(6) to be set 00:32:42.370 [2024-12-14 16:46:12.232281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bad70 (9): Bad file descriptor 00:32:42.370 [2024-12-14 16:46:12.232297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:42.370 [2024-12-14 16:46:12.232304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:42.370 [2024-12-14 16:46:12.232310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:42.370 [2024-12-14 16:46:12.232315] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:42.370 [2024-12-14 16:46:12.232320] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:42.370 [2024-12-14 16:46:12.232323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:42.370 [2024-12-14 16:46:12.242061] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:42.370 [2024-12-14 16:46:12.242071] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:42.370 [2024-12-14 16:46:12.242078] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:42.370 [2024-12-14 16:46:12.242081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:42.370 [2024-12-14 16:46:12.242093] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:42.370 [2024-12-14 16:46:12.242306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.370 [2024-12-14 16:46:12.242318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bad70 with addr=10.0.0.2, port=4420 00:32:42.370 [2024-12-14 16:46:12.242325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bad70 is same with the state(6) to be set 00:32:42.370 [2024-12-14 16:46:12.242335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bad70 (9): Bad file descriptor 00:32:42.370 [2024-12-14 16:46:12.242360] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:42.370 [2024-12-14 16:46:12.242373] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:42.370 [2024-12-14 16:46:12.242391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:42.370 [2024-12-14 16:46:12.242398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:42.370 [2024-12-14 16:46:12.242404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:42.370 [2024-12-14 16:46:12.242410] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:42.370 [2024-12-14 16:46:12.242414] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:42.370 [2024-12-14 16:46:12.242418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.370 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:42.371 16:46:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:42.371 16:46:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:42.371 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.630 16:46:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.565 [2024-12-14 16:46:13.553618] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:43.565 [2024-12-14 16:46:13.553640] bdev_nvme.c:7602:discovery_poller: *INFO*: 
Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:43.565 [2024-12-14 16:46:13.553651] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:43.565 [2024-12-14 16:46:13.641898] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:43.824 [2024-12-14 16:46:13.707454] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:43.824 [2024-12-14 16:46:13.708030] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x5f4be0:1 started. 00:32:43.824 [2024-12-14 16:46:13.709502] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:43.824 [2024-12-14 16:46:13.709525] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:43.824 [2024-12-14 16:46:13.712715] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x5f4be0 was disconnected and freed. 
delete nvme_qpair. 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.824 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.824 request: 00:32:43.824 { 00:32:43.824 "name": "nvme", 00:32:43.824 "trtype": "tcp", 00:32:43.824 "traddr": "10.0.0.2", 00:32:43.824 "adrfam": "ipv4", 00:32:43.824 "trsvcid": "8009", 00:32:43.824 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:43.824 "wait_for_attach": true, 00:32:43.824 "method": "bdev_nvme_start_discovery", 00:32:43.825 "req_id": 1 00:32:43.825 } 00:32:43.825 Got JSON-RPC error response 00:32:43.825 response: 00:32:43.825 { 00:32:43.825 "code": -17, 00:32:43.825 "message": "File exists" 00:32:43.825 } 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:43.825 16:46:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.825 request: 00:32:43.825 { 00:32:43.825 "name": "nvme_second", 00:32:43.825 "trtype": "tcp", 00:32:43.825 "traddr": "10.0.0.2", 00:32:43.825 "adrfam": "ipv4", 00:32:43.825 "trsvcid": "8009", 00:32:43.825 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:43.825 "wait_for_attach": true, 00:32:43.825 "method": "bdev_nvme_start_discovery", 00:32:43.825 "req_id": 1 00:32:43.825 } 00:32:43.825 Got JSON-RPC error response 00:32:43.825 response: 00:32:43.825 { 00:32:43.825 "code": -17, 
00:32:43.825 "message": "File exists" 00:32:43.825 } 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:43.825 16:46:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.825 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:44.084 16:46:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.084 16:46:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.019 [2024-12-14 16:46:14.948936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.019 [2024-12-14 16:46:14.948962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ba730 with addr=10.0.0.2, port=8010 00:32:45.019 [2024-12-14 16:46:14.948973] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:45.019 [2024-12-14 16:46:14.948979] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:45.019 [2024-12-14 16:46:14.948985] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:45.954 [2024-12-14 16:46:15.951293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.954 [2024-12-14 16:46:15.951317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ba730 with addr=10.0.0.2, port=8010 00:32:45.954 [2024-12-14 16:46:15.951327] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:45.954 [2024-12-14 16:46:15.951333] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:45.954 [2024-12-14 16:46:15.951339] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:46.889 [2024-12-14 16:46:16.953532] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:46.889 request: 00:32:46.889 { 00:32:46.889 "name": "nvme_second", 00:32:46.889 "trtype": "tcp", 00:32:46.889 "traddr": "10.0.0.2", 00:32:46.889 "adrfam": "ipv4", 00:32:46.889 "trsvcid": "8010", 00:32:46.889 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:46.889 "wait_for_attach": false, 00:32:46.889 "attach_timeout_ms": 3000, 
00:32:46.889 "method": "bdev_nvme_start_discovery", 00:32:46.889 "req_id": 1 00:32:46.889 } 00:32:46.889 Got JSON-RPC error response 00:32:46.889 response: 00:32:46.889 { 00:32:46.889 "code": -110, 00:32:46.889 "message": "Connection timed out" 00:32:46.889 } 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:46.889 16:46:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 1155588 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.149 rmmod nvme_tcp 00:32:47.149 rmmod nvme_fabrics 00:32:47.149 rmmod nvme_keyring 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1155420 ']' 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1155420 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1155420 ']' 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1155420 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1155420 00:32:47.149 16:46:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1155420' 00:32:47.149 killing process with pid 1155420 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1155420 00:32:47.149 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1155420 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.408 16:46:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.313 
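The teardown above steps through the harness's `killprocess` helper (autotest_common.sh@954-978): verify the pid, inspect the process name via `ps`, special-case a `sudo` wrapper, then kill and reap. A minimal reconstruction from the xtrace, purely illustrative — the real helper's error handling and the `sudo` branch details are assumptions:

```shell
# Sketch of killprocess as seen in the xtrace above (autotest_common.sh@954-978).
# The sudo special-case and exact return codes are assumptions.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1          # '[' -z ... ']' guard in the log
    kill -0 "$pid" || return 1         # kill -0: is the process alive?
    local process_name=
    if [ "$(uname)" = Linux ]; then
        # ps --no-headers -o comm= <pid>, as in the trace
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        sudo kill "$pid"               # the target is wrapped by sudo
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null            # reap; returns the signal status
}
```

Here the killed pid (1155420) is the nvmf target app started earlier in the suite; `wait` returning the signal exit status is why the harness follows `kill` with an explicit `wait` on the same pid.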
16:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.313 00:32:49.313 real 0m16.888s 00:32:49.313 user 0m20.067s 00:32:49.313 sys 0m5.734s 00:32:49.313 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.313 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.313 ************************************ 00:32:49.313 END TEST nvmf_host_discovery 00:32:49.313 ************************************ 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.573 ************************************ 00:32:49.573 START TEST nvmf_host_multipath_status 00:32:49.573 ************************************ 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:49.573 * Looking for test storage... 
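The `END TEST` / `START TEST` banners bracketing each suite come from the harness's `run_test` wrapper (invoked above as `run_test nvmf_host_multipath_status .../multipath_status.sh --transport=tcp`). A rough sketch of that wrapping, reconstructed only from what the banners and `real/user/sys` timing lines show — the actual helper in autotest_common.sh does more (xtrace control, argument checks like `'[' 3 -le 1 ']'`):

```shell
# Illustrative reconstruction of the run_test banner/timing pattern seen in
# the log; the real helper lives in autotest_common.sh and differs in detail.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # timing produces the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```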
00:32:49.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:49.573 16:46:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.573 16:46:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:49.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.573 --rc genhtml_branch_coverage=1 00:32:49.573 --rc genhtml_function_coverage=1 00:32:49.573 --rc genhtml_legend=1 00:32:49.573 --rc geninfo_all_blocks=1 00:32:49.573 --rc geninfo_unexecuted_blocks=1 00:32:49.573 00:32:49.573 ' 00:32:49.573 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.574 --rc genhtml_branch_coverage=1 00:32:49.574 --rc genhtml_function_coverage=1 00:32:49.574 --rc genhtml_legend=1 00:32:49.574 --rc geninfo_all_blocks=1 00:32:49.574 --rc geninfo_unexecuted_blocks=1 00:32:49.574 00:32:49.574 ' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.574 --rc genhtml_branch_coverage=1 00:32:49.574 --rc genhtml_function_coverage=1 00:32:49.574 --rc genhtml_legend=1 00:32:49.574 --rc geninfo_all_blocks=1 00:32:49.574 --rc geninfo_unexecuted_blocks=1 00:32:49.574 00:32:49.574 ' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.574 --rc genhtml_branch_coverage=1 00:32:49.574 --rc genhtml_function_coverage=1 00:32:49.574 --rc genhtml_legend=1 00:32:49.574 --rc geninfo_all_blocks=1 00:32:49.574 --rc geninfo_unexecuted_blocks=1 00:32:49.574 00:32:49.574 ' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:49.574 
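The `lt 1.15 2` / `cmp_versions` steps above split each dotted version on `.`/`-` into `ver1`/`ver2` arrays and compare component by component. A simplified stand-alone version of that comparison (assumption: numeric dot-separated components only, no `-` separators as scripts/common.sh also handles):

```shell
# Simplified re-implementation of the dotted-version "less than" check the
# xtrace walks through (scripts/common.sh cmp_versions); assumes purely
# numeric dot-separated components.
lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n a b
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}                 # missing components compare as 0
        b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                          # equal versions are not "less than"
}
```

In the log this decides whether the installed lcov (1.15) predates 2.x, which selects the `--rc lcov_branch_coverage=1 ...` option set exported just below.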
16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
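Above, nvmf/common.sh generates `NVME_HOSTNQN` with `nvme gen-hostnqn` and then derives `NVME_HOSTID` as the trailing UUID. That derivation can be expressed with a parameter expansion; the exact mechanism common.sh uses is an assumption, and the UUID below is just the one this run happened to generate:

```shell
# Deriving NVME_HOSTID from the generated hostnqn, as the trace shows
# (how common.sh actually extracts it is an assumption).
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # strip everything through "uuid:"
echo "$NVME_HOSTID"                   # -> 80b56b8f-cbc7-e911-906e-0017a4403562
```

Both values then feed `NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")` for later `nvme connect` calls.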
00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:49.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.574 16:46:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.574 16:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:56.142 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:56.142 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.142 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:56.142 Found net devices under 0000:af:00.0: cvl_0_0 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.143 16:46:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:56.143 Found net devices under 0000:af:00.1: cvl_0_1 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.143 16:46:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:56.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:56.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:32:56.143 00:32:56.143 --- 10.0.0.2 ping statistics --- 00:32:56.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.143 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:56.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:56.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:32:56.143 00:32:56.143 --- 10.0.0.1 ping statistics --- 00:32:56.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.143 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1160356 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 1160356 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1160356 ']' 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:56.143 [2024-12-14 16:46:25.595241] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:56.143 [2024-12-14 16:46:25.595287] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.143 [2024-12-14 16:46:25.675017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:56.143 [2024-12-14 16:46:25.696682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.143 [2024-12-14 16:46:25.696720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:56.143 [2024-12-14 16:46:25.696727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.143 [2024-12-14 16:46:25.696733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.143 [2024-12-14 16:46:25.696738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:56.143 [2024-12-14 16:46:25.697858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.143 [2024-12-14 16:46:25.697860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1160356 00:32:56.143 16:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:56.143 [2024-12-14 16:46:26.001268] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.143 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:32:56.143 Malloc0 00:32:56.402 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:56.402 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:56.660 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.919 [2024-12-14 16:46:26.777062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:56.919 [2024-12-14 16:46:26.977609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1160618 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1160618 /var/tmp/bdevperf.sock 00:32:56.919 16:46:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1160618 ']' 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:56.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.919 16:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:57.178 16:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.178 16:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:57.178 16:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:57.436 16:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:58.002 Nvme0n1 00:32:58.002 16:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:58.260 Nvme0n1 00:32:58.260 16:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:58.260 16:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:00.795 16:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:00.795 16:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:00.795 16:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:00.795 16:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:01.728 16:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:01.728 16:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:01.728 16:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.728 16:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:01.986 16:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.986 16:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:01.986 16:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.986 16:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.245 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.245 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.245 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.245 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.503 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.503 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.503 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.503 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.503 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.503 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:02.503 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.503 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:02.761 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.761 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:02.761 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:02.761 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.020 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.020 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:03.020 16:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:03.278 16:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:03.278 16:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:04.653 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:04.653 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:04.653 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.653 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.653 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.653 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:04.653 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.653 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.911 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.911 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.911 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.911 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:04.911 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.911 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:04.911 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.911 16:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.169 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.169 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.169 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.169 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.437 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.437 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:05.437 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.437 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.698 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.698 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:05.698 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:05.956 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:05.956 16:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:07.330 16:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:07.330 16:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:07.330 16:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.330 16:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.330 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.330 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.330 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.330 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.330 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.330 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.331 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.331 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:07.588 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.588 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:07.588 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.588 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:07.847 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.847 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:07.847 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.847 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.104 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.104 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:08.104 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.104 16:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.362 16:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.362 16:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:08.362 16:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:08.362 16:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:08.620 16:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:09.553 16:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:09.553 16:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:09.811 16:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.811 16:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:09.811 16:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.811 16:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:09.811 16:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.811 16:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.070 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.070 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.070 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.070 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.328 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.328 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:10.328 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.328 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:10.586 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.587 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:10.587 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.587 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:10.845 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.845 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:10.845 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.845 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:10.845 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.845 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:10.845 16:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:11.103 16:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:11.361 16:46:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:12.295 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:12.295 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:12.296 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.296 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:12.554 16:46:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.554 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:12.554 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.554 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.813 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.813 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.813 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.813 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:13.071 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.071 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:13.071 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.071 16:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:13.071 
16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.071 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:13.071 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:13.071 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.329 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.329 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:13.329 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.329 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.588 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.588 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:13.588 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:13.846 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:13.846 16:46:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:15.226 16:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:15.226 16:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:15.226 16:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.226 16:46:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:15.226 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.226 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:15.226 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.226 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.483 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.483 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.483 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.483 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.483 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.483 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.483 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.483 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:15.741 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.741 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:15.741 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.741 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:15.999 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.999 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:15.999 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.999 16:46:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.259 16:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.259 16:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:16.517 16:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:16.518 16:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:16.776 16:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:16.776 16:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:18.152 16:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:18.152 16:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:18.152 16:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:18.152 16:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.152 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.152 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:18.152 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.152 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:18.411 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.411 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:18.411 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.411 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:18.670 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.670 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:18.670 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:18.670 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:18.670 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.670 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:18.670 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.670 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:18.928 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.928 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:18.928 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.929 16:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.187 16:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.187 16:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:19.187 16:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:19.446 16:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:19.704 16:46:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:20.641 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:20.641 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:20.641 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.641 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:20.900 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:20.900 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:20.900 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.900 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:20.900 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.900 16:46:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:20.900 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.900 16:46:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.158 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.158 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.158 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.158 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:21.417 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.417 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:21.417 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.417 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:21.676 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.676 
16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:21.676 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.676 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:21.935 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.935 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:21.935 16:46:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:21.935 16:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:22.194 16:46:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:23.187 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:23.187 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:23.187 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.187 16:46:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:23.491 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.491 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:23.491 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.491 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.776 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.776 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:23.776 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:23.776 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.035 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.035 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.035 16:46:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.035 16:46:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.035 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.035 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:24.035 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.035 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:24.294 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.294 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:24.294 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.294 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.553 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.553 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:24.553 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:24.811 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:25.070 16:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:26.004 16:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:26.004 16:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:26.004 16:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.004 16:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:26.262 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.262 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:26.262 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.262 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:26.262 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:26.262 16:46:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:26.262 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.262 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:26.521 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.521 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:26.521 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.521 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:26.780 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.780 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:26.780 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.780 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:27.038 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.038 
16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:27.038 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:27.038 16:46:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1160618 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1160618 ']' 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1160618 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160618 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160618' 00:33:27.297 killing process with pid 1160618 00:33:27.297 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1160618 00:33:27.297 
16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1160618 00:33:27.297 { 00:33:27.297 "results": [ 00:33:27.297 { 00:33:27.297 "job": "Nvme0n1", 00:33:27.297 "core_mask": "0x4", 00:33:27.297 "workload": "verify", 00:33:27.297 "status": "terminated", 00:33:27.297 "verify_range": { 00:33:27.297 "start": 0, 00:33:27.297 "length": 16384 00:33:27.297 }, 00:33:27.297 "queue_depth": 128, 00:33:27.297 "io_size": 4096, 00:33:27.297 "runtime": 28.780255, 00:33:27.297 "iops": 10713.247676228026, 00:33:27.297 "mibps": 41.848623735265726, 00:33:27.297 "io_failed": 0, 00:33:27.297 "io_timeout": 0, 00:33:27.297 "avg_latency_us": 11928.219232961592, 00:33:27.297 "min_latency_us": 152.13714285714286, 00:33:27.297 "max_latency_us": 3019898.88 00:33:27.297 } 00:33:27.297 ], 00:33:27.297 "core_count": 1 00:33:27.297 } 00:33:27.572 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1160618 00:33:27.572 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:27.572 [2024-12-14 16:46:27.036960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:27.572 [2024-12-14 16:46:27.037018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160618 ] 00:33:27.572 [2024-12-14 16:46:27.110544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.572 [2024-12-14 16:46:27.132600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:27.572 Running I/O for 90 seconds... 
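The bdevperf summary above reports both `iops` and `mibps` for the terminated verify job; the two are consistent, since MiB/s is just IOPS times the 4096-byte `io_size`, scaled to MiB. A minimal sketch checking that, against a hand-copied subset of the results JSON printed in this run (only the fields used below are reproduced):

```python
import json

# Hand-copied subset of the bdevperf "results" JSON from the log above.
results = json.loads("""
{
  "results": [
    {
      "job": "Nvme0n1",
      "workload": "verify",
      "status": "terminated",
      "io_size": 4096,
      "runtime": 28.780255,
      "iops": 10713.247676228026,
      "mibps": 41.848623735265726
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# Throughput in MiB/s = IOPS * io_size, scaled to MiB (4096/1048576 = 1/256).
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(f"{job['job']}: {job['iops']:.0f} IOPS, {mibps:.2f} MiB/s")
```

The computed value matches the reported `mibps` field to floating-point precision, which confirms the reported runtime statistics are internally consistent.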
00:33:27.572 11412.00 IOPS, 44.58 MiB/s [2024-12-14T15:46:57.658Z] 11448.00 IOPS, 44.72 MiB/s [2024-12-14T15:46:57.658Z] 11513.67 IOPS, 44.98 MiB/s [2024-12-14T15:46:57.658Z] 11582.25 IOPS, 45.24 MiB/s [2024-12-14T15:46:57.658Z] 11586.00 IOPS, 45.26 MiB/s [2024-12-14T15:46:57.658Z] 11579.83 IOPS, 45.23 MiB/s [2024-12-14T15:46:57.658Z] 11565.57 IOPS, 45.18 MiB/s [2024-12-14T15:46:57.658Z] 11546.75 IOPS, 45.10 MiB/s [2024-12-14T15:46:57.658Z] 11558.44 IOPS, 45.15 MiB/s [2024-12-14T15:46:57.658Z] 11562.80 IOPS, 45.17 MiB/s [2024-12-14T15:46:57.658Z] 11566.27 IOPS, 45.18 MiB/s [2024-12-14T15:46:57.658Z] 11563.42 IOPS, 45.17 MiB/s [2024-12-14T15:46:57.658Z] [2024-12-14 16:46:41.067703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.067740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.067789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.067797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.067810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.067817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.067830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.067837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.067849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.067856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.067869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.067875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.067888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.067895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.067908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.067915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.068417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.068433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.068447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.068463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.068477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.068483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.068497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.068504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.068517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.068524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.068536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.572 [2024-12-14 16:46:41.068543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:27.572 [2024-12-14 16:46:41.068562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.068982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.068995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.573 [2024-12-14 16:46:41.069312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:27.573 [2024-12-14 16:46:41.069325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:122320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.069977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.069984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:27.574 [2024-12-14 16:46:41.070288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.574 [2024-12-14 16:46:41.070295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:41.070793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.070977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.070984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.071001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.071009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.071026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.071033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.071049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.071056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.071073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.071079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.071096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.071102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.071119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.071126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.071143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.071150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:41.071167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.575 [2024-12-14 16:46:41.071173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:27.575 11252.54 IOPS, 43.96 MiB/s [2024-12-14T15:46:57.661Z] 10448.79 IOPS, 40.82 MiB/s [2024-12-14T15:46:57.661Z] 9752.20 IOPS, 38.09 MiB/s [2024-12-14T15:46:57.661Z] 9394.31 IOPS, 36.70 MiB/s [2024-12-14T15:46:57.661Z] 9524.94 IOPS, 37.21 MiB/s [2024-12-14T15:46:57.661Z] 9642.22 IOPS, 37.66 MiB/s [2024-12-14T15:46:57.661Z] 9840.95 IOPS, 38.44 MiB/s [2024-12-14T15:46:57.661Z] 10020.45 IOPS, 39.14 MiB/s [2024-12-14T15:46:57.661Z] 10187.00 IOPS, 39.79 MiB/s [2024-12-14T15:46:57.661Z] 10240.27 IOPS, 40.00 MiB/s [2024-12-14T15:46:57.661Z] 10293.91 IOPS, 40.21 MiB/s [2024-12-14T15:46:57.661Z] 10357.25 IOPS, 40.46 MiB/s [2024-12-14T15:46:57.661Z] 10477.72 IOPS, 40.93 MiB/s [2024-12-14T15:46:57.661Z] 10591.27 IOPS, 41.37 MiB/s [2024-12-14T15:46:57.661Z] [2024-12-14 16:46:54.884859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:54.884900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 
16:46:54.884921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:54.884929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:54.884942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.575 [2024-12-14 16:46:54.884949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:27.575 [2024-12-14 16:46:54.884966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.576 [2024-12-14 16:46:54.884973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:27.576 [2024-12-14 16:46:54.884985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.576 [2024-12-14 16:46:54.884992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:27.576 [2024-12-14 16:46:54.885004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.576 [2024-12-14 16:46:54.885011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:27.576 [2024-12-14 16:46:54.885024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.576 [2024-12-14 16:46:54.885030] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:27.576 [2024-12-14 16:46:54.885042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.576 [2024-12-14 16:46:54.885048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1, nsid:1, lba 19064-20104, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd advancing 0043 through 002e, timestamps 2024-12-14 16:46:54.885-16:46:54.890 ...]
00:33:27.579 [2024-12-14 16:46:54.890407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.579 [2024-12-14 16:46:54.890415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.890433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.890673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.890692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.890711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.890730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.890749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.890988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.890998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.579 [2024-12-14 16:46:54.891288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.579 [2024-12-14 16:46:54.891383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:27.579 [2024-12-14 16:46:54.891395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.891402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.891414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.891421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.891434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.891440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.891452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.891459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.891472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.891479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.891492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.891498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.892149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.892176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.892196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.892220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.892240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.892260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.892279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.892299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.892318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.892338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.892358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.892378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.892398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.892410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.892419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.893737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.893759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.893779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.893799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.893818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.893839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.893859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.893878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.580 [2024-12-14 16:46:54.893898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.893917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.893937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.893960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.580 [2024-12-14 16:46:54.893979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:27.580 [2024-12-14 16:46:54.893992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.580 [2024-12-14 16:46:54.893999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:27.580 [2024-12-14 16:46:54.894012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.580 [2024-12-14 16:46:54.894019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:27.580 [2024-12-14 16:46:54.894031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.580 [2024-12-14 16:46:54.894038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:27.580 [2024-12-14 16:46:54.894050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.580 [2024-12-14 16:46:54.894058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:27.580 [2024-12-14 16:46:54.894070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.580 [2024-12-14 16:46:54.894077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:27.580 [2024-12-14 16:46:54.894090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.580 [2024-12-14 16:46:54.894096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:27.580 [2024-12-14 16:46:54.894109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.580 [2024-12-14 16:46:54.894116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:27.580 [2024-12-14 16:46:54.894128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.894135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.894148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.894155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.894167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.894174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.894187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.894194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.894208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.894215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.894228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.894234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.894247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.894253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.894266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.894273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.894286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.894293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.895991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.895998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.896308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.896353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.896373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.896393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.581 [2024-12-14 16:46:54.896432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.896451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.581 [2024-12-14 16:46:54.896471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:27.581 [2024-12-14 16:46:54.896483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.896548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.896849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.896857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.905469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.905490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.905509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.905528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.905546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.905569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.905590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.905610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.905628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.905647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.905666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.905685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.905697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.905705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.907385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.582 [2024-12-14 16:46:54.907403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.907422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.907440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.582 [2024-12-14 16:46:54.907460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:27.582 [2024-12-14 16:46:54.907472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.907479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.908633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.908654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.908676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.908695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.908714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.908976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.908988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.908995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.909050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.909069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.909087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.909106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.583 [2024-12-14 16:46:54.909236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:27.583 [2024-12-14 16:46:54.909249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.583 [2024-12-14 16:46:54.909255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:27.583 [2024-12-14 16:46:54.909267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.583 [2024-12-14 16:46:54.909274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:27.583 [2024-12-14 16:46:54.909285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.583 [2024-12-14 16:46:54.909292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:27.583 [2024-12-14 16:46:54.909304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.583 [2024-12-14 16:46:54.909310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:27.583 [2024-12-14 16:46:54.909322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.583 [2024-12-14 16:46:54.909328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:27.583 [2024-12-14 16:46:54.909340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.583 [2024-12-14 16:46:54.909347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:27.583 [2024-12-14 16:46:54.909360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.583 [2024-12-14 16:46:54.909367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.909379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.909386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.911655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.911684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.911710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.911736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.911988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.911997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.912023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.912151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.912176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.912379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.912404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.912446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.912455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.913101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.913119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.913138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.913147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.913164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.913173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.913193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.913202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.913218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.913227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.913244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.584 [2024-12-14 16:46:54.913253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:27.584 [2024-12-14 16:46:54.913269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.584 [2024-12-14 16:46:54.913278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.913329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.913354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.913379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.913660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.913669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.914995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.915285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.915310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.915337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.915362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.915388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.585 [2024-12-14 16:46:54.915415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:27.585 [2024-12-14 16:46:54.915510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.585 [2024-12-14 16:46:54.915519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.915544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.915626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.915752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.915882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.915932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.915958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.915983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.915999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.916161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.916186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.916253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.916262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.918036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.918064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.918089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.918115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.918139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.918169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.918194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.586 [2024-12-14 16:46:54.918218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.918244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.586 [2024-12-14 16:46:54.918260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.586 [2024-12-14 16:46:54.918269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.918294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.918319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.918344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.918369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.918394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.918419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.918444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.918471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.918496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.918521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.918546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.918577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.918603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.918628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.918645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.918655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.919701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.919901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.919910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.921235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.587 [2024-12-14 16:46:54.921251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.921264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.921271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:27.587 [2024-12-14 16:46:54.921284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.587 [2024-12-14 16:46:54.921290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.921719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.921789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.921798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.588 [2024-12-14 16:46:54.924419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.924438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.924457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.588 [2024-12-14 16:46:54.924476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:27.588 [2024-12-14 16:46:54.924488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.924949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.924987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.924999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.925007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.925027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.925046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.589 [2024-12-14 16:46:54.925064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.925084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.925103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.925122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.925141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.925159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.925178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.589 [2024-12-14 16:46:54.925198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:27.589 [2024-12-14 16:46:54.925210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.925217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.925229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.925237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.925249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.590 [2024-12-14 16:46:54.925256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.925269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.590 [2024-12-14 16:46:54.925276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.590 [2024-12-14 16:46:54.927280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.590 [2024-12-14 16:46:54.927302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.590 [2024-12-14 16:46:54.927478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.590 [2024-12-14 16:46:54.927497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.590 [2024-12-14 16:46:54.927516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.590 [2024-12-14 16:46:54.927617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:27.590 [2024-12-14 16:46:54.927630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.927636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.927649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.927655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.927667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.927674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.927688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.927695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.927709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.590 [2024-12-14 16:46:54.927715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.927727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.927734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.927746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.927752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.927765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.590 [2024-12-14 16:46:54.927771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.927784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.927791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.590 [2024-12-14 16:46:54.928225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.928246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.928265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.928285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.590 [2024-12-14 16:46:54.928304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.590 [2024-12-14 16:46:54.928323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.590 [2024-12-14 16:46:54.928342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.590 [2024-12-14 16:46:54.928363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:27.590 [2024-12-14 16:46:54.928376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.928731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.928803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.928809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.929479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.929497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.929517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.929536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.591 [2024-12-14 16:46:54.929639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:27.591 [2024-12-14 16:46:54.929671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.591 [2024-12-14 16:46:54.929678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.929690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.929696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.929709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.929716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.929728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.929734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.929746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.929754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.929766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.929773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.930261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.930281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.930299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.930319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.930414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.930433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.930454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.930485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.930492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.931273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.931314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.931332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.931352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.931371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.931451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.931471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.592 [2024-12-14 16:46:54.931573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:27.592 [2024-12-14 16:46:54.931585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.592 [2024-12-14 16:46:54.931592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.593 [2024-12-14 16:46:54.931611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.593 [2024-12-14 16:46:54.931630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.593 [2024-12-14 16:46:54.931649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.593 [2024-12-14 16:46:54.931668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.593 [2024-12-14 16:46:54.931688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.931708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.931727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.593 [2024-12-14 16:46:54.931747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.593 [2024-12-14 16:46:54.931765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.931784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.931803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.931816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:27.593 [2024-12-14 16:46:54.931823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.932941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.932958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.932973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.932980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.932993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.932999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.933012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.933019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.933032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.933039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.933054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.933061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:27.593 [2024-12-14 16:46:54.933073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.593 [2024-12-14 16:46:54.933080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.593 [2024-12-14 16:46:54.933632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.933992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.593 [2024-12-14 16:46:54.933999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:27.593 [2024-12-14 16:46:54.934011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.934079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.934098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.934117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.934136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.934213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.934270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.934290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.934305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.934312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.594 [2024-12-14 16:46:54.935871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.935883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.594 [2024-12-14 16:46:54.935891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:27.594 [2024-12-14 16:46:54.937302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.595 [2024-12-14 16:46:54.937319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.595 [2024-12-14 16:46:54.937340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.595 [2024-12-14 16:46:54.937596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:27.595 [2024-12-14 16:46:54.937608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.595 [2024-12-14 16:46:54.937617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
[... ~70 further nvme_io_qpair_print_command READ/WRITE prints with matching spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions on qid:1 (sqhd:0029 through sqhd:0071, timestamps 16:46:54.937629-16:46:54.941776) trimmed for brevity ...]
00:33:27.597 10645.22 IOPS, 41.58 MiB/s [2024-12-14T15:46:57.683Z] 10692.29 IOPS, 41.77 MiB/s [2024-12-14T15:46:57.683Z] Received shutdown signal, test time was about 28.780878 seconds
00:33:27.597 Latency(us)
00:33:27.597 [2024-12-14T15:46:57.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:27.597 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:27.597 Verification LBA range: start 0x0 length 0x4000
00:33:27.597 Nvme0n1 : 28.78 10713.25 41.85 0.00 0.00 11928.22 152.14 3019898.88
00:33:27.597 [2024-12-14T15:46:57.683Z] ===================================================================================================================
00:33:27.597 [2024-12-14T15:46:57.683Z] Total : 10713.25 41.85 0.00 0.00 11928.22 152.14 3019898.88
00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap -
SIGINT SIGTERM EXIT 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.597 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.597 rmmod nvme_tcp 00:33:27.597 rmmod nvme_fabrics 00:33:27.597 rmmod nvme_keyring 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1160356 ']' 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1160356 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1160356 ']' 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1160356 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:27.857 
16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160356 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160356' 00:33:27.857 killing process with pid 1160356 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1160356 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1160356 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.857 16:46:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.857 16:46:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.393 16:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:30.393 00:33:30.393 real 0m40.530s 00:33:30.393 user 1m49.886s 00:33:30.393 sys 0m11.626s 00:33:30.393 16:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.393 16:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:30.393 ************************************ 00:33:30.393 END TEST nvmf_host_multipath_status 00:33:30.393 ************************************ 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.393 ************************************ 00:33:30.393 START TEST nvmf_discovery_remove_ifc 00:33:30.393 ************************************ 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:30.393 * Looking for test storage... 
00:33:30.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:33:30.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.393 --rc genhtml_branch_coverage=1 00:33:30.393 --rc genhtml_function_coverage=1 00:33:30.393 --rc genhtml_legend=1 00:33:30.393 --rc geninfo_all_blocks=1 00:33:30.393 --rc geninfo_unexecuted_blocks=1 00:33:30.393 00:33:30.393 ' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:30.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.393 --rc genhtml_branch_coverage=1 00:33:30.393 --rc genhtml_function_coverage=1 00:33:30.393 --rc genhtml_legend=1 00:33:30.393 --rc geninfo_all_blocks=1 00:33:30.393 --rc geninfo_unexecuted_blocks=1 00:33:30.393 00:33:30.393 ' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:30.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.393 --rc genhtml_branch_coverage=1 00:33:30.393 --rc genhtml_function_coverage=1 00:33:30.393 --rc genhtml_legend=1 00:33:30.393 --rc geninfo_all_blocks=1 00:33:30.393 --rc geninfo_unexecuted_blocks=1 00:33:30.393 00:33:30.393 ' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:30.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.393 --rc genhtml_branch_coverage=1 00:33:30.393 --rc genhtml_function_coverage=1 00:33:30.393 --rc genhtml_legend=1 00:33:30.393 --rc geninfo_all_blocks=1 00:33:30.393 --rc geninfo_unexecuted_blocks=1 00:33:30.393 00:33:30.393 ' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.393 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:30.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:30.394 
16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:30.394 16:47:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.961 16:47:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.961 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.962 16:47:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:36.962 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.962 16:47:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:36.962 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:36.962 Found net devices under 0000:af:00.0: cvl_0_0 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:36.962 Found net devices under 0000:af:00.1: cvl_0_1 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.962 16:47:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:33:36.962 00:33:36.962 --- 10.0.0.2 ping statistics --- 00:33:36.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.962 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:33:36.962 00:33:36.962 --- 10.0.0.1 ping statistics --- 00:33:36.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.962 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1169162 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1169162 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1169162 ']' 00:33:36.962 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.963 [2024-12-14 16:47:06.135259] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:36.963 [2024-12-14 16:47:06.135306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.963 [2024-12-14 16:47:06.211757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.963 [2024-12-14 16:47:06.232839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.963 [2024-12-14 16:47:06.232877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:36.963 [2024-12-14 16:47:06.232884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.963 [2024-12-14 16:47:06.232890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.963 [2024-12-14 16:47:06.232895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.963 [2024-12-14 16:47:06.233377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.963 [2024-12-14 16:47:06.372047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.963 [2024-12-14 16:47:06.380202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:36.963 null0 00:33:36.963 [2024-12-14 16:47:06.412208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1169183 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1169183 /tmp/host.sock 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1169183 ']' 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:36.963 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.963 [2024-12-14 16:47:06.481919] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:33:36.963 [2024-12-14 16:47:06.481961] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169183 ] 00:33:36.963 [2024-12-14 16:47:06.555176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.963 [2024-12-14 16:47:06.578120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.963 16:47:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.963 16:47:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.899 [2024-12-14 16:47:07.768715] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:37.899 [2024-12-14 16:47:07.768735] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:37.899 [2024-12-14 16:47:07.768747] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:37.899 [2024-12-14 16:47:07.855008] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:37.899 [2024-12-14 16:47:07.950657] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:37.899 [2024-12-14 16:47:07.951350] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2419710:1 started. 
00:33:37.899 [2024-12-14 16:47:07.952617] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:37.899 [2024-12-14 16:47:07.952654] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:37.899 [2024-12-14 16:47:07.952672] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:37.899 [2024-12-14 16:47:07.952684] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:37.899 [2024-12-14 16:47:07.952700] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.899 [2024-12-14 16:47:07.957367] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2419710 was disconnected and freed. delete nvme_qpair. 
00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.899 16:47:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.157 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:38.157 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:38.157 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:38.157 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.158 16:47:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.158 16:47:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.094 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.094 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.094 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.094 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.094 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.094 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.094 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.094 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.352 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:39.352 16:47:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.288 16:47:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.224 16:47:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.224 16:47:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:42.600 16:47:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.536 16:47:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.536 [2024-12-14 16:47:13.394271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:43.536 [2024-12-14 16:47:13.394310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.536 [2024-12-14 16:47:13.394337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.536 [2024-12-14 16:47:13.394347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.536 [2024-12-14 16:47:13.394354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.536 [2024-12-14 16:47:13.394361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.536 [2024-12-14 16:47:13.394368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.536 [2024-12-14 16:47:13.394376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.536 
[2024-12-14 16:47:13.394382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.536 [2024-12-14 16:47:13.394389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.536 [2024-12-14 16:47:13.394396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.536 [2024-12-14 16:47:13.394402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5ec0 is same with the state(6) to be set 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:43.536 16:47:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.536 [2024-12-14 16:47:13.404293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5ec0 (9): Bad file descriptor 00:33:43.536 [2024-12-14 16:47:13.414330] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:43.536 [2024-12-14 16:47:13.414342] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:43.536 [2024-12-14 16:47:13.414349] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:43.536 [2024-12-14 16:47:13.414354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:43.536 [2024-12-14 16:47:13.414375] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.470 [2024-12-14 16:47:14.435649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:44.470 [2024-12-14 16:47:14.435737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f5ec0 with addr=10.0.0.2, port=4420 00:33:44.470 [2024-12-14 16:47:14.435772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5ec0 is same with the state(6) to be set 00:33:44.470 [2024-12-14 16:47:14.435830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5ec0 (9): Bad file descriptor 00:33:44.470 [2024-12-14 16:47:14.436793] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:33:44.470 [2024-12-14 16:47:14.436856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:44.470 [2024-12-14 16:47:14.436879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:44.470 [2024-12-14 16:47:14.436904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:44.470 [2024-12-14 16:47:14.436924] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:44.470 [2024-12-14 16:47:14.436941] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:44.470 [2024-12-14 16:47:14.436955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:44.470 [2024-12-14 16:47:14.436977] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:44.470 [2024-12-14 16:47:14.436992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:44.470 16:47:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.405 [2024-12-14 16:47:15.439501] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:45.405 [2024-12-14 16:47:15.439522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:45.405 [2024-12-14 16:47:15.439533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:45.405 [2024-12-14 16:47:15.439540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:45.405 [2024-12-14 16:47:15.439547] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:45.405 [2024-12-14 16:47:15.439554] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:45.405 [2024-12-14 16:47:15.439564] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:45.405 [2024-12-14 16:47:15.439568] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:45.405 [2024-12-14 16:47:15.439589] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:45.405 [2024-12-14 16:47:15.439611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.405 [2024-12-14 16:47:15.439621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.405 [2024-12-14 16:47:15.439632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.405 [2024-12-14 16:47:15.439643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.405 [2024-12-14 16:47:15.439650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:45.405 [2024-12-14 16:47:15.439657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.405 [2024-12-14 16:47:15.439664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.405 [2024-12-14 16:47:15.439670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.405 [2024-12-14 16:47:15.439677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.405 [2024-12-14 16:47:15.439683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.405 [2024-12-14 16:47:15.439690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:45.405 [2024-12-14 16:47:15.439984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e55e0 (9): Bad file descriptor 00:33:45.405 [2024-12-14 16:47:15.440996] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:45.405 [2024-12-14 16:47:15.441006] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:45.405 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.405 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.405 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.405 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:45.405 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.405 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.405 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.405 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:45.664 16:47:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:46.599 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:46.599 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:46.599 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:46.599 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.599 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:46.599 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:46.599 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:46.600 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.858 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:46.858 16:47:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:47.425 [2024-12-14 16:47:17.494014] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:47.425 [2024-12-14 16:47:17.494032] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:47.425 [2024-12-14 16:47:17.494043] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:47.684 [2024-12-14 16:47:17.622419] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:47.684 16:47:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:47.943 [2024-12-14 16:47:17.845507] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:47.943 [2024-12-14 16:47:17.846109] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x23f6260:1 started. 
00:33:47.943 [2024-12-14 16:47:17.847120] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:47.943 [2024-12-14 16:47:17.847151] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:47.943 [2024-12-14 16:47:17.847166] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:47.943 [2024-12-14 16:47:17.847178] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:47.943 [2024-12-14 16:47:17.847185] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:47.943 [2024-12-14 16:47:17.852766] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x23f6260 was disconnected and freed. delete nvme_qpair. 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:48.880 16:47:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1169183 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1169183 ']' 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1169183 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1169183 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1169183' 00:33:48.880 killing process with pid 1169183 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1169183 00:33:48.880 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1169183 00:33:49.140 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:49.140 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.140 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:49.140 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.140 
16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:49.140 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:49.140 16:47:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.140 rmmod nvme_tcp 00:33:49.140 rmmod nvme_fabrics 00:33:49.140 rmmod nvme_keyring 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1169162 ']' 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1169162 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1169162 ']' 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1169162 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1169162 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1169162' 00:33:49.140 
killing process with pid 1169162 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1169162 00:33:49.140 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1169162 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.399 16:47:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.304 16:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:51.304 00:33:51.304 real 0m21.297s 00:33:51.304 user 0m26.576s 00:33:51.304 sys 0m5.805s 00:33:51.304 16:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.304 16:47:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.304 ************************************ 00:33:51.304 END TEST nvmf_discovery_remove_ifc 00:33:51.304 ************************************ 00:33:51.304 16:47:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:51.304 16:47:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:51.304 16:47:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.304 16:47:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.564 ************************************ 00:33:51.564 START TEST nvmf_identify_kernel_target 00:33:51.564 ************************************ 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:51.564 * Looking for test storage... 
00:33:51.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:51.564 16:47:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.564 16:47:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:51.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.564 --rc genhtml_branch_coverage=1 00:33:51.564 --rc genhtml_function_coverage=1 00:33:51.564 --rc genhtml_legend=1 00:33:51.564 --rc geninfo_all_blocks=1 00:33:51.564 --rc geninfo_unexecuted_blocks=1 00:33:51.564 00:33:51.564 ' 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:51.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.564 --rc genhtml_branch_coverage=1 00:33:51.564 --rc genhtml_function_coverage=1 00:33:51.564 --rc genhtml_legend=1 00:33:51.564 --rc geninfo_all_blocks=1 00:33:51.564 --rc geninfo_unexecuted_blocks=1 00:33:51.564 00:33:51.564 ' 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:51.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.564 --rc genhtml_branch_coverage=1 00:33:51.564 --rc genhtml_function_coverage=1 00:33:51.564 --rc genhtml_legend=1 00:33:51.564 --rc geninfo_all_blocks=1 00:33:51.564 --rc geninfo_unexecuted_blocks=1 00:33:51.564 00:33:51.564 ' 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:51.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.564 --rc genhtml_branch_coverage=1 00:33:51.564 --rc genhtml_function_coverage=1 00:33:51.564 --rc genhtml_legend=1 00:33:51.564 --rc geninfo_all_blocks=1 00:33:51.564 --rc geninfo_unexecuted_blocks=1 00:33:51.564 00:33:51.564 ' 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.564 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:51.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:51.565 16:47:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.136 16:47:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:58.136 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:58.136 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.137 16:47:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:58.137 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.137 16:47:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:58.137 Found net devices under 0000:af:00.0: cvl_0_0 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:58.137 Found net devices under 0000:af:00.1: cvl_0_1 
00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:58.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:33:58.137 00:33:58.137 --- 10.0.0.2 ping statistics --- 00:33:58.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.137 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:58.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:33:58.137 00:33:58.137 --- 10.0.0.1 ping statistics --- 00:33:58.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.137 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:58.137 
16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:58.137 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:58.138 16:47:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:00.673 Waiting for block devices as requested 00:34:00.673 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:00.673 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:00.673 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:00.673 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:00.673 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:00.673 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:00.932 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:00.932 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:00.932 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:00.932 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:01.191 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:01.191 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:01.191 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:01.191 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:01.450 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:34:01.450 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:01.450 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:01.709 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:01.710 No valid GPT data, bailing 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:01.710 00:34:01.710 Discovery Log Number of Records 2, Generation counter 2 00:34:01.710 =====Discovery Log Entry 0====== 00:34:01.710 trtype: tcp 00:34:01.710 adrfam: ipv4 00:34:01.710 subtype: current discovery subsystem 
00:34:01.710 treq: not specified, sq flow control disable supported 00:34:01.710 portid: 1 00:34:01.710 trsvcid: 4420 00:34:01.710 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:01.710 traddr: 10.0.0.1 00:34:01.710 eflags: none 00:34:01.710 sectype: none 00:34:01.710 =====Discovery Log Entry 1====== 00:34:01.710 trtype: tcp 00:34:01.710 adrfam: ipv4 00:34:01.710 subtype: nvme subsystem 00:34:01.710 treq: not specified, sq flow control disable supported 00:34:01.710 portid: 1 00:34:01.710 trsvcid: 4420 00:34:01.710 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:01.710 traddr: 10.0.0.1 00:34:01.710 eflags: none 00:34:01.710 sectype: none 00:34:01.710 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:01.710 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:01.970 ===================================================== 00:34:01.970 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:01.970 ===================================================== 00:34:01.970 Controller Capabilities/Features 00:34:01.970 ================================ 00:34:01.970 Vendor ID: 0000 00:34:01.970 Subsystem Vendor ID: 0000 00:34:01.970 Serial Number: f5216caee84ab9cac70c 00:34:01.971 Model Number: Linux 00:34:01.971 Firmware Version: 6.8.9-20 00:34:01.971 Recommended Arb Burst: 0 00:34:01.971 IEEE OUI Identifier: 00 00 00 00:34:01.971 Multi-path I/O 00:34:01.971 May have multiple subsystem ports: No 00:34:01.971 May have multiple controllers: No 00:34:01.971 Associated with SR-IOV VF: No 00:34:01.971 Max Data Transfer Size: Unlimited 00:34:01.971 Max Number of Namespaces: 0 00:34:01.971 Max Number of I/O Queues: 1024 00:34:01.971 NVMe Specification Version (VS): 1.3 00:34:01.971 NVMe Specification Version (Identify): 1.3 00:34:01.971 Maximum Queue Entries: 1024 
00:34:01.971 Contiguous Queues Required: No 00:34:01.971 Arbitration Mechanisms Supported 00:34:01.971 Weighted Round Robin: Not Supported 00:34:01.971 Vendor Specific: Not Supported 00:34:01.971 Reset Timeout: 7500 ms 00:34:01.971 Doorbell Stride: 4 bytes 00:34:01.971 NVM Subsystem Reset: Not Supported 00:34:01.971 Command Sets Supported 00:34:01.971 NVM Command Set: Supported 00:34:01.971 Boot Partition: Not Supported 00:34:01.971 Memory Page Size Minimum: 4096 bytes 00:34:01.971 Memory Page Size Maximum: 4096 bytes 00:34:01.971 Persistent Memory Region: Not Supported 00:34:01.971 Optional Asynchronous Events Supported 00:34:01.971 Namespace Attribute Notices: Not Supported 00:34:01.971 Firmware Activation Notices: Not Supported 00:34:01.971 ANA Change Notices: Not Supported 00:34:01.971 PLE Aggregate Log Change Notices: Not Supported 00:34:01.971 LBA Status Info Alert Notices: Not Supported 00:34:01.971 EGE Aggregate Log Change Notices: Not Supported 00:34:01.971 Normal NVM Subsystem Shutdown event: Not Supported 00:34:01.971 Zone Descriptor Change Notices: Not Supported 00:34:01.971 Discovery Log Change Notices: Supported 00:34:01.971 Controller Attributes 00:34:01.971 128-bit Host Identifier: Not Supported 00:34:01.971 Non-Operational Permissive Mode: Not Supported 00:34:01.971 NVM Sets: Not Supported 00:34:01.971 Read Recovery Levels: Not Supported 00:34:01.971 Endurance Groups: Not Supported 00:34:01.971 Predictable Latency Mode: Not Supported 00:34:01.971 Traffic Based Keep ALive: Not Supported 00:34:01.971 Namespace Granularity: Not Supported 00:34:01.971 SQ Associations: Not Supported 00:34:01.971 UUID List: Not Supported 00:34:01.971 Multi-Domain Subsystem: Not Supported 00:34:01.971 Fixed Capacity Management: Not Supported 00:34:01.971 Variable Capacity Management: Not Supported 00:34:01.971 Delete Endurance Group: Not Supported 00:34:01.971 Delete NVM Set: Not Supported 00:34:01.971 Extended LBA Formats Supported: Not Supported 00:34:01.971 Flexible 
Data Placement Supported: Not Supported 00:34:01.971 00:34:01.971 Controller Memory Buffer Support 00:34:01.971 ================================ 00:34:01.971 Supported: No 00:34:01.971 00:34:01.971 Persistent Memory Region Support 00:34:01.971 ================================ 00:34:01.971 Supported: No 00:34:01.971 00:34:01.971 Admin Command Set Attributes 00:34:01.971 ============================ 00:34:01.971 Security Send/Receive: Not Supported 00:34:01.971 Format NVM: Not Supported 00:34:01.971 Firmware Activate/Download: Not Supported 00:34:01.971 Namespace Management: Not Supported 00:34:01.971 Device Self-Test: Not Supported 00:34:01.971 Directives: Not Supported 00:34:01.971 NVMe-MI: Not Supported 00:34:01.971 Virtualization Management: Not Supported 00:34:01.971 Doorbell Buffer Config: Not Supported 00:34:01.971 Get LBA Status Capability: Not Supported 00:34:01.971 Command & Feature Lockdown Capability: Not Supported 00:34:01.971 Abort Command Limit: 1 00:34:01.971 Async Event Request Limit: 1 00:34:01.971 Number of Firmware Slots: N/A 00:34:01.971 Firmware Slot 1 Read-Only: N/A 00:34:01.971 Firmware Activation Without Reset: N/A 00:34:01.971 Multiple Update Detection Support: N/A 00:34:01.971 Firmware Update Granularity: No Information Provided 00:34:01.971 Per-Namespace SMART Log: No 00:34:01.971 Asymmetric Namespace Access Log Page: Not Supported 00:34:01.971 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:01.971 Command Effects Log Page: Not Supported 00:34:01.971 Get Log Page Extended Data: Supported 00:34:01.971 Telemetry Log Pages: Not Supported 00:34:01.971 Persistent Event Log Pages: Not Supported 00:34:01.971 Supported Log Pages Log Page: May Support 00:34:01.971 Commands Supported & Effects Log Page: Not Supported 00:34:01.971 Feature Identifiers & Effects Log Page:May Support 00:34:01.971 NVMe-MI Commands & Effects Log Page: May Support 00:34:01.971 Data Area 4 for Telemetry Log: Not Supported 00:34:01.971 Error Log Page Entries 
Supported: 1 00:34:01.971 Keep Alive: Not Supported 00:34:01.971 00:34:01.971 NVM Command Set Attributes 00:34:01.971 ========================== 00:34:01.971 Submission Queue Entry Size 00:34:01.971 Max: 1 00:34:01.971 Min: 1 00:34:01.971 Completion Queue Entry Size 00:34:01.971 Max: 1 00:34:01.971 Min: 1 00:34:01.971 Number of Namespaces: 0 00:34:01.971 Compare Command: Not Supported 00:34:01.971 Write Uncorrectable Command: Not Supported 00:34:01.971 Dataset Management Command: Not Supported 00:34:01.971 Write Zeroes Command: Not Supported 00:34:01.971 Set Features Save Field: Not Supported 00:34:01.971 Reservations: Not Supported 00:34:01.971 Timestamp: Not Supported 00:34:01.971 Copy: Not Supported 00:34:01.971 Volatile Write Cache: Not Present 00:34:01.971 Atomic Write Unit (Normal): 1 00:34:01.971 Atomic Write Unit (PFail): 1 00:34:01.971 Atomic Compare & Write Unit: 1 00:34:01.971 Fused Compare & Write: Not Supported 00:34:01.971 Scatter-Gather List 00:34:01.971 SGL Command Set: Supported 00:34:01.971 SGL Keyed: Not Supported 00:34:01.971 SGL Bit Bucket Descriptor: Not Supported 00:34:01.971 SGL Metadata Pointer: Not Supported 00:34:01.971 Oversized SGL: Not Supported 00:34:01.971 SGL Metadata Address: Not Supported 00:34:01.971 SGL Offset: Supported 00:34:01.971 Transport SGL Data Block: Not Supported 00:34:01.971 Replay Protected Memory Block: Not Supported 00:34:01.971 00:34:01.971 Firmware Slot Information 00:34:01.971 ========================= 00:34:01.971 Active slot: 0 00:34:01.971 00:34:01.971 00:34:01.971 Error Log 00:34:01.971 ========= 00:34:01.971 00:34:01.971 Active Namespaces 00:34:01.971 ================= 00:34:01.971 Discovery Log Page 00:34:01.971 ================== 00:34:01.971 Generation Counter: 2 00:34:01.971 Number of Records: 2 00:34:01.971 Record Format: 0 00:34:01.971 00:34:01.971 Discovery Log Entry 0 00:34:01.971 ---------------------- 00:34:01.971 Transport Type: 3 (TCP) 00:34:01.971 Address Family: 1 (IPv4) 00:34:01.971 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:34:01.971 Entry Flags: 00:34:01.971 Duplicate Returned Information: 0 00:34:01.971 Explicit Persistent Connection Support for Discovery: 0 00:34:01.971 Transport Requirements: 00:34:01.971 Secure Channel: Not Specified 00:34:01.971 Port ID: 1 (0x0001) 00:34:01.971 Controller ID: 65535 (0xffff) 00:34:01.971 Admin Max SQ Size: 32 00:34:01.971 Transport Service Identifier: 4420 00:34:01.971 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:01.971 Transport Address: 10.0.0.1 00:34:01.971 Discovery Log Entry 1 00:34:01.971 ---------------------- 00:34:01.971 Transport Type: 3 (TCP) 00:34:01.971 Address Family: 1 (IPv4) 00:34:01.971 Subsystem Type: 2 (NVM Subsystem) 00:34:01.971 Entry Flags: 00:34:01.971 Duplicate Returned Information: 0 00:34:01.971 Explicit Persistent Connection Support for Discovery: 0 00:34:01.971 Transport Requirements: 00:34:01.971 Secure Channel: Not Specified 00:34:01.971 Port ID: 1 (0x0001) 00:34:01.971 Controller ID: 65535 (0xffff) 00:34:01.971 Admin Max SQ Size: 32 00:34:01.971 Transport Service Identifier: 4420 00:34:01.971 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:01.971 Transport Address: 10.0.0.1 00:34:01.971 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:01.971 get_feature(0x01) failed 00:34:01.971 get_feature(0x02) failed 00:34:01.971 get_feature(0x04) failed 00:34:01.971 ===================================================== 00:34:01.971 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:01.971 ===================================================== 00:34:01.971 Controller Capabilities/Features 00:34:01.971 ================================ 00:34:01.971 Vendor ID: 0000 00:34:01.971 Subsystem Vendor ID: 
0000 00:34:01.971 Serial Number: bba72d5529cf0366ccf3 00:34:01.971 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:01.971 Firmware Version: 6.8.9-20 00:34:01.971 Recommended Arb Burst: 6 00:34:01.971 IEEE OUI Identifier: 00 00 00 00:34:01.972 Multi-path I/O 00:34:01.972 May have multiple subsystem ports: Yes 00:34:01.972 May have multiple controllers: Yes 00:34:01.972 Associated with SR-IOV VF: No 00:34:01.972 Max Data Transfer Size: Unlimited 00:34:01.972 Max Number of Namespaces: 1024 00:34:01.972 Max Number of I/O Queues: 128 00:34:01.972 NVMe Specification Version (VS): 1.3 00:34:01.972 NVMe Specification Version (Identify): 1.3 00:34:01.972 Maximum Queue Entries: 1024 00:34:01.972 Contiguous Queues Required: No 00:34:01.972 Arbitration Mechanisms Supported 00:34:01.972 Weighted Round Robin: Not Supported 00:34:01.972 Vendor Specific: Not Supported 00:34:01.972 Reset Timeout: 7500 ms 00:34:01.972 Doorbell Stride: 4 bytes 00:34:01.972 NVM Subsystem Reset: Not Supported 00:34:01.972 Command Sets Supported 00:34:01.972 NVM Command Set: Supported 00:34:01.972 Boot Partition: Not Supported 00:34:01.972 Memory Page Size Minimum: 4096 bytes 00:34:01.972 Memory Page Size Maximum: 4096 bytes 00:34:01.972 Persistent Memory Region: Not Supported 00:34:01.972 Optional Asynchronous Events Supported 00:34:01.972 Namespace Attribute Notices: Supported 00:34:01.972 Firmware Activation Notices: Not Supported 00:34:01.972 ANA Change Notices: Supported 00:34:01.972 PLE Aggregate Log Change Notices: Not Supported 00:34:01.972 LBA Status Info Alert Notices: Not Supported 00:34:01.972 EGE Aggregate Log Change Notices: Not Supported 00:34:01.972 Normal NVM Subsystem Shutdown event: Not Supported 00:34:01.972 Zone Descriptor Change Notices: Not Supported 00:34:01.972 Discovery Log Change Notices: Not Supported 00:34:01.972 Controller Attributes 00:34:01.972 128-bit Host Identifier: Supported 00:34:01.972 Non-Operational Permissive Mode: Not Supported 00:34:01.972 NVM Sets: Not 
Supported 00:34:01.972 Read Recovery Levels: Not Supported 00:34:01.972 Endurance Groups: Not Supported 00:34:01.972 Predictable Latency Mode: Not Supported 00:34:01.972 Traffic Based Keep ALive: Supported 00:34:01.972 Namespace Granularity: Not Supported 00:34:01.972 SQ Associations: Not Supported 00:34:01.972 UUID List: Not Supported 00:34:01.972 Multi-Domain Subsystem: Not Supported 00:34:01.972 Fixed Capacity Management: Not Supported 00:34:01.972 Variable Capacity Management: Not Supported 00:34:01.972 Delete Endurance Group: Not Supported 00:34:01.972 Delete NVM Set: Not Supported 00:34:01.972 Extended LBA Formats Supported: Not Supported 00:34:01.972 Flexible Data Placement Supported: Not Supported 00:34:01.972 00:34:01.972 Controller Memory Buffer Support 00:34:01.972 ================================ 00:34:01.972 Supported: No 00:34:01.972 00:34:01.972 Persistent Memory Region Support 00:34:01.972 ================================ 00:34:01.972 Supported: No 00:34:01.972 00:34:01.972 Admin Command Set Attributes 00:34:01.972 ============================ 00:34:01.972 Security Send/Receive: Not Supported 00:34:01.972 Format NVM: Not Supported 00:34:01.972 Firmware Activate/Download: Not Supported 00:34:01.972 Namespace Management: Not Supported 00:34:01.972 Device Self-Test: Not Supported 00:34:01.972 Directives: Not Supported 00:34:01.972 NVMe-MI: Not Supported 00:34:01.972 Virtualization Management: Not Supported 00:34:01.972 Doorbell Buffer Config: Not Supported 00:34:01.972 Get LBA Status Capability: Not Supported 00:34:01.972 Command & Feature Lockdown Capability: Not Supported 00:34:01.972 Abort Command Limit: 4 00:34:01.972 Async Event Request Limit: 4 00:34:01.972 Number of Firmware Slots: N/A 00:34:01.972 Firmware Slot 1 Read-Only: N/A 00:34:01.972 Firmware Activation Without Reset: N/A 00:34:01.972 Multiple Update Detection Support: N/A 00:34:01.972 Firmware Update Granularity: No Information Provided 00:34:01.972 Per-Namespace SMART Log: Yes 
00:34:01.972 Asymmetric Namespace Access Log Page: Supported 00:34:01.972 ANA Transition Time : 10 sec 00:34:01.972 00:34:01.972 Asymmetric Namespace Access Capabilities 00:34:01.972 ANA Optimized State : Supported 00:34:01.972 ANA Non-Optimized State : Supported 00:34:01.972 ANA Inaccessible State : Supported 00:34:01.972 ANA Persistent Loss State : Supported 00:34:01.972 ANA Change State : Supported 00:34:01.972 ANAGRPID is not changed : No 00:34:01.972 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:01.972 00:34:01.972 ANA Group Identifier Maximum : 128 00:34:01.972 Number of ANA Group Identifiers : 128 00:34:01.972 Max Number of Allowed Namespaces : 1024 00:34:01.972 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:01.972 Command Effects Log Page: Supported 00:34:01.972 Get Log Page Extended Data: Supported 00:34:01.972 Telemetry Log Pages: Not Supported 00:34:01.972 Persistent Event Log Pages: Not Supported 00:34:01.972 Supported Log Pages Log Page: May Support 00:34:01.972 Commands Supported & Effects Log Page: Not Supported 00:34:01.972 Feature Identifiers & Effects Log Page:May Support 00:34:01.972 NVMe-MI Commands & Effects Log Page: May Support 00:34:01.972 Data Area 4 for Telemetry Log: Not Supported 00:34:01.972 Error Log Page Entries Supported: 128 00:34:01.972 Keep Alive: Supported 00:34:01.972 Keep Alive Granularity: 1000 ms 00:34:01.972 00:34:01.972 NVM Command Set Attributes 00:34:01.972 ========================== 00:34:01.972 Submission Queue Entry Size 00:34:01.972 Max: 64 00:34:01.972 Min: 64 00:34:01.972 Completion Queue Entry Size 00:34:01.972 Max: 16 00:34:01.972 Min: 16 00:34:01.972 Number of Namespaces: 1024 00:34:01.972 Compare Command: Not Supported 00:34:01.972 Write Uncorrectable Command: Not Supported 00:34:01.972 Dataset Management Command: Supported 00:34:01.972 Write Zeroes Command: Supported 00:34:01.972 Set Features Save Field: Not Supported 00:34:01.972 Reservations: Not Supported 00:34:01.972 Timestamp: Not Supported 
00:34:01.972 Copy: Not Supported 00:34:01.972 Volatile Write Cache: Present 00:34:01.972 Atomic Write Unit (Normal): 1 00:34:01.972 Atomic Write Unit (PFail): 1 00:34:01.972 Atomic Compare & Write Unit: 1 00:34:01.972 Fused Compare & Write: Not Supported 00:34:01.972 Scatter-Gather List 00:34:01.972 SGL Command Set: Supported 00:34:01.972 SGL Keyed: Not Supported 00:34:01.972 SGL Bit Bucket Descriptor: Not Supported 00:34:01.972 SGL Metadata Pointer: Not Supported 00:34:01.972 Oversized SGL: Not Supported 00:34:01.972 SGL Metadata Address: Not Supported 00:34:01.972 SGL Offset: Supported 00:34:01.972 Transport SGL Data Block: Not Supported 00:34:01.972 Replay Protected Memory Block: Not Supported 00:34:01.972 00:34:01.972 Firmware Slot Information 00:34:01.972 ========================= 00:34:01.972 Active slot: 0 00:34:01.972 00:34:01.972 Asymmetric Namespace Access 00:34:01.972 =========================== 00:34:01.972 Change Count : 0 00:34:01.972 Number of ANA Group Descriptors : 1 00:34:01.972 ANA Group Descriptor : 0 00:34:01.972 ANA Group ID : 1 00:34:01.972 Number of NSID Values : 1 00:34:01.972 Change Count : 0 00:34:01.972 ANA State : 1 00:34:01.972 Namespace Identifier : 1 00:34:01.972 00:34:01.972 Commands Supported and Effects 00:34:01.972 ============================== 00:34:01.972 Admin Commands 00:34:01.972 -------------- 00:34:01.972 Get Log Page (02h): Supported 00:34:01.972 Identify (06h): Supported 00:34:01.972 Abort (08h): Supported 00:34:01.972 Set Features (09h): Supported 00:34:01.972 Get Features (0Ah): Supported 00:34:01.972 Asynchronous Event Request (0Ch): Supported 00:34:01.972 Keep Alive (18h): Supported 00:34:01.972 I/O Commands 00:34:01.972 ------------ 00:34:01.972 Flush (00h): Supported 00:34:01.972 Write (01h): Supported LBA-Change 00:34:01.972 Read (02h): Supported 00:34:01.972 Write Zeroes (08h): Supported LBA-Change 00:34:01.972 Dataset Management (09h): Supported 00:34:01.972 00:34:01.972 Error Log 00:34:01.972 ========= 
00:34:01.972 Entry: 0 00:34:01.972 Error Count: 0x3 00:34:01.972 Submission Queue Id: 0x0 00:34:01.972 Command Id: 0x5 00:34:01.972 Phase Bit: 0 00:34:01.972 Status Code: 0x2 00:34:01.972 Status Code Type: 0x0 00:34:01.972 Do Not Retry: 1 00:34:01.972 Error Location: 0x28 00:34:01.972 LBA: 0x0 00:34:01.972 Namespace: 0x0 00:34:01.972 Vendor Log Page: 0x0 00:34:01.972 ----------- 00:34:01.972 Entry: 1 00:34:01.972 Error Count: 0x2 00:34:01.972 Submission Queue Id: 0x0 00:34:01.972 Command Id: 0x5 00:34:01.972 Phase Bit: 0 00:34:01.972 Status Code: 0x2 00:34:01.972 Status Code Type: 0x0 00:34:01.972 Do Not Retry: 1 00:34:01.972 Error Location: 0x28 00:34:01.972 LBA: 0x0 00:34:01.972 Namespace: 0x0 00:34:01.972 Vendor Log Page: 0x0 00:34:01.972 ----------- 00:34:01.972 Entry: 2 00:34:01.972 Error Count: 0x1 00:34:01.972 Submission Queue Id: 0x0 00:34:01.972 Command Id: 0x4 00:34:01.972 Phase Bit: 0 00:34:01.972 Status Code: 0x2 00:34:01.972 Status Code Type: 0x0 00:34:01.972 Do Not Retry: 1 00:34:01.972 Error Location: 0x28 00:34:01.972 LBA: 0x0 00:34:01.972 Namespace: 0x0 00:34:01.973 Vendor Log Page: 0x0 00:34:01.973 00:34:01.973 Number of Queues 00:34:01.973 ================ 00:34:01.973 Number of I/O Submission Queues: 128 00:34:01.973 Number of I/O Completion Queues: 128 00:34:01.973 00:34:01.973 ZNS Specific Controller Data 00:34:01.973 ============================ 00:34:01.973 Zone Append Size Limit: 0 00:34:01.973 00:34:01.973 00:34:01.973 Active Namespaces 00:34:01.973 ================= 00:34:01.973 get_feature(0x05) failed 00:34:01.973 Namespace ID:1 00:34:01.973 Command Set Identifier: NVM (00h) 00:34:01.973 Deallocate: Supported 00:34:01.973 Deallocated/Unwritten Error: Not Supported 00:34:01.973 Deallocated Read Value: Unknown 00:34:01.973 Deallocate in Write Zeroes: Not Supported 00:34:01.973 Deallocated Guard Field: 0xFFFF 00:34:01.973 Flush: Supported 00:34:01.973 Reservation: Not Supported 00:34:01.973 Namespace Sharing Capabilities: Multiple 
Controllers 00:34:01.973 Size (in LBAs): 1953525168 (931GiB) 00:34:01.973 Capacity (in LBAs): 1953525168 (931GiB) 00:34:01.973 Utilization (in LBAs): 1953525168 (931GiB) 00:34:01.973 UUID: 4f075063-a193-4120-b5d1-c02b5d077bbd 00:34:01.973 Thin Provisioning: Not Supported 00:34:01.973 Per-NS Atomic Units: Yes 00:34:01.973 Atomic Boundary Size (Normal): 0 00:34:01.973 Atomic Boundary Size (PFail): 0 00:34:01.973 Atomic Boundary Offset: 0 00:34:01.973 NGUID/EUI64 Never Reused: No 00:34:01.973 ANA group ID: 1 00:34:01.973 Namespace Write Protected: No 00:34:01.973 Number of LBA Formats: 1 00:34:01.973 Current LBA Format: LBA Format #00 00:34:01.973 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:01.973 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:01.973 rmmod nvme_tcp 00:34:01.973 rmmod nvme_fabrics 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.973 16:47:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:04.509 16:47:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:04.509 16:47:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:07.045 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:07.045 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:34:07.982 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:07.982 00:34:07.982 real 0m16.542s 00:34:07.982 user 0m4.304s 00:34:07.982 sys 0m8.680s 00:34:07.982 16:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.982 16:47:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.982 ************************************ 00:34:07.982 END TEST nvmf_identify_kernel_target 00:34:07.982 ************************************ 00:34:07.982 16:47:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:07.982 16:47:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:07.982 16:47:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.982 16:47:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.982 ************************************ 00:34:07.982 START TEST nvmf_auth_host 00:34:07.982 ************************************ 00:34:07.982 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:08.242 * Looking for test storage... 
00:34:08.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:08.242 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:08.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.243 --rc genhtml_branch_coverage=1 00:34:08.243 --rc genhtml_function_coverage=1 00:34:08.243 --rc genhtml_legend=1 00:34:08.243 --rc geninfo_all_blocks=1 00:34:08.243 --rc geninfo_unexecuted_blocks=1 00:34:08.243 00:34:08.243 ' 00:34:08.243 16:47:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:08.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.243 --rc genhtml_branch_coverage=1 00:34:08.243 --rc genhtml_function_coverage=1 00:34:08.243 --rc genhtml_legend=1 00:34:08.243 --rc geninfo_all_blocks=1 00:34:08.243 --rc geninfo_unexecuted_blocks=1 00:34:08.243 00:34:08.243 ' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:08.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.243 --rc genhtml_branch_coverage=1 00:34:08.243 --rc genhtml_function_coverage=1 00:34:08.243 --rc genhtml_legend=1 00:34:08.243 --rc geninfo_all_blocks=1 00:34:08.243 --rc geninfo_unexecuted_blocks=1 00:34:08.243 00:34:08.243 ' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:08.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.243 --rc genhtml_branch_coverage=1 00:34:08.243 --rc genhtml_function_coverage=1 00:34:08.243 --rc genhtml_legend=1 00:34:08.243 --rc geninfo_all_blocks=1 00:34:08.243 --rc geninfo_unexecuted_blocks=1 00:34:08.243 00:34:08.243 ' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.243 16:47:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:08.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.243 16:47:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.243 16:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:14.814 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:14.814 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:14.814 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:14.815 Found net devices under 0000:af:00.0: cvl_0_0 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:14.815 Found net devices under 0000:af:00.1: cvl_0_1 00:34:14.815 16:47:43 
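The "Found net devices under …" lines above come from resolving each PCI function to its kernel interface name by globbing sysfs. A standalone sketch of that lookup (function name is mine; it needs no privileges and prints nothing when no net device is bound):

```shell
# Map a PCI address (e.g. 0000:af:00.0) to its net interface name(s)
# by globbing sysfs, the same mechanism the harness uses above.
pci_net_devs() {
  local pci=$1 path
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$path" ] || continue      # glob did not match: no net dev bound
    echo "${path##*/}"              # strip the sysfs prefix, keep the ifname
  done
}
```

On the test rig, `pci_net_devs 0000:af:00.0` would print `cvl_0_0` while the ice driver is bound; after a rebind to vfio-pci it prints nothing.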
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:14.815 16:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:14.815 16:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:14.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:34:14.815 00:34:14.815 --- 10.0.0.2 ping statistics --- 00:34:14.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.815 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:14.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:34:14.815 00:34:14.815 --- 10.0.0.1 ping statistics --- 00:34:14.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.815 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1180967 00:34:14.815 16:47:44 
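The `ip netns` block above moves the target interface into its own namespace so target and initiator traverse a real IP path even though both run on one host, then verifies connectivity with a ping in each direction. A sketch of that split (helper name and argument order are mine; requires root and a spare interface, so it is illustrative rather than a drop-in):

```shell
# Sketch of the namespace split performed above (hypothetical helper).
# The target NIC lives in $ns; the initiator NIC stays in the root netns.
setup_target_ns() {
  local ns=$1 target_if=$2 initiator_if=$3
  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"                      # target side
  ip addr add 10.0.0.1/24 dev "$initiator_if"               # initiator IP
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up
  ip netns exec "$ns" ping -c 1 10.0.0.1                    # sanity check
}
```

The log's `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")` prefix is then what lets `nvmf_tgt` start inside the target namespace.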
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1180967 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1180967 ']' 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e090a6077593156197ab3f88da97611f 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.x1X 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e090a6077593156197ab3f88da97611f 0 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e090a6077593156197ab3f88da97611f 0 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e090a6077593156197ab3f88da97611f 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.x1X 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.x1X 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.x1X 00:34:14.815 16:47:44 
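`gen_dhchap_key` above draws random bytes with `xxd` and formats them via an inline Python snippet. The DHHC-1 secret representation it produces is, as far as I can tell, base64 of the raw key with a little-endian CRC-32 appended (assumption: this matches the nvme-cli format; the helper below is my condensation, not SPDK's actual code):

```shell
# Condensed sketch of gen_dhchap_key (assumption: the DHHC-1 payload is
# base64(key || CRC-32 little-endian), as nvme-cli produces).
#   $1 = digest id (0=null, 1=sha256, 2=sha384, 3=sha512), $2 = hex length
gen_dhchap_key() {
  python3 - "$1" "$2" <<'EOF'
import base64, os, sys, zlib
digest, hexlen = int(sys.argv[1]), int(sys.argv[2])
key = os.urandom(hexlen // 2)                    # same entropy as xxd -l len/2
crc = zlib.crc32(key).to_bytes(4, "little")      # integrity tag on the secret
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}
```

`gen_dhchap_key 0 32` prints a `DHHC-1:00:…:` string carrying a 16-byte secret, matching the null-digest 32-hex-char key the log generates first.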
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:14.815 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=25e03b3b558a83b3a3291604f6fce70a1b0483df8a4be4e99518839eb8bd4e48 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.535 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 25e03b3b558a83b3a3291604f6fce70a1b0483df8a4be4e99518839eb8bd4e48 3 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 25e03b3b558a83b3a3291604f6fce70a1b0483df8a4be4e99518839eb8bd4e48 3 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=25e03b3b558a83b3a3291604f6fce70a1b0483df8a4be4e99518839eb8bd4e48 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.535 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.535 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.535 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a53c27448c53b6795d4926b138d673e1f786b624dba83c61 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wWV 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a53c27448c53b6795d4926b138d673e1f786b624dba83c61 0 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a53c27448c53b6795d4926b138d673e1f786b624dba83c61 0 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.816 16:47:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a53c27448c53b6795d4926b138d673e1f786b624dba83c61 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wWV 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wWV 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wWV 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6c752986009e4754465f7ed98485c9f6058ef917286f4927 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.H7p 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6c752986009e4754465f7ed98485c9f6058ef917286f4927 2 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 6c752986009e4754465f7ed98485c9f6058ef917286f4927 2 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6c752986009e4754465f7ed98485c9f6058ef917286f4927 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.H7p 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.H7p 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.H7p 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=af5cf5460c24dea524cd8c059e09c068 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.y24 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key af5cf5460c24dea524cd8c059e09c068 1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 af5cf5460c24dea524cd8c059e09c068 1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=af5cf5460c24dea524cd8c059e09c068 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.y24 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.y24 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.y24 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=c4b124bb5bc4e0f5f7d5b60055405bda 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8YF 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c4b124bb5bc4e0f5f7d5b60055405bda 1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c4b124bb5bc4e0f5f7d5b60055405bda 1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c4b124bb5bc4e0f5f7d5b60055405bda 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8YF 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8YF 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8YF 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:14.816 16:47:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ede62c9d04f4e4f704cc00ce3c05b1ac56bdc17e0150eb68 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DLy 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ede62c9d04f4e4f704cc00ce3c05b1ac56bdc17e0150eb68 2 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ede62c9d04f4e4f704cc00ce3c05b1ac56bdc17e0150eb68 2 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ede62c9d04f4e4f704cc00ce3c05b1ac56bdc17e0150eb68 00:34:14.816 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DLy 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DLy 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.DLy 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f2b1733251733443a9e65b6af9501ed5 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Tq4 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f2b1733251733443a9e65b6af9501ed5 0 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f2b1733251733443a9e65b6af9501ed5 0 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f2b1733251733443a9e65b6af9501ed5 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Tq4 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Tq4 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Tq4 00:34:14.817 16:47:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7a6a3cd9a6ef57ece11cb18d2f7cd1a9d3b2327085326d83f292a22ff70e1874 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PTE 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7a6a3cd9a6ef57ece11cb18d2f7cd1a9d3b2327085326d83f292a22ff70e1874 3 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7a6a3cd9a6ef57ece11cb18d2f7cd1a9d3b2327085326d83f292a22ff70e1874 3 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7a6a3cd9a6ef57ece11cb18d2f7cd1a9d3b2327085326d83f292a22ff70e1874 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:14.817 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PTE 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PTE 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PTE 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1180967 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1180967 ']' 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
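The `gen_dhchap_key` calls traced above can be sketched as a standalone snippet: draw `len/2` random bytes as hex with `xxd`, then base64-encode the hex string plus its little-endian CRC32 into a `DHHC-1:<digest>:<base64>:` secret, and store it mode 0600. The helper names mirror the trace, but the exact body of the `python -` step is an assumption inferred from the key strings that appear later in the log (e.g. `DHHC-1:00:YTUz...==:`), not a verbatim copy of `nvmf/common.sh`.

```shell
#!/bin/sh
# Hedged sketch of gen_dhchap_key + format_dhchap_key as traced above.
# digest 0 = null, 1 = sha256, 2 = sha384, 3 = sha512 (per the digests map).
key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, as for "null 48"
file=$(mktemp -t spdk.key-null.XXX)
# Assumed encoding: base64(ascii-hex key || CRC32 little-endian), wrapped
# as "DHHC-1:<digest as two hex digits>:<base64>:".
python3 - "$key" 0 > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
EOF
chmod 0600 "$file"
cat "$file"
```

The test then registers each file with `rpc_cmd keyring_file_add_key`, as the later trace shows.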
00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.077 16:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x1X 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.535 ]] 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.535 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wWV 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.H7p ]] 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H7p 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.y24 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8YF ]] 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8YF 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.077 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.335 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.335 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:15.335 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.DLy 00:34:15.335 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Tq4 ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Tq4 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PTE 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.336 16:47:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:15.336 16:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:17.981 Waiting for block devices as requested 00:34:17.981 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:17.981 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:18.240 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:18.240 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:18.240 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:18.240 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:18.499 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:18.499 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:18.499 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:18.499 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:18.757 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:18.757 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:18.757 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:18.757 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:19.016 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:19.016 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:19.016 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:19.583 No valid GPT data, bailing 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:19.583 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:19.842 00:34:19.842 Discovery Log Number of Records 2, Generation counter 2 00:34:19.842 =====Discovery Log Entry 0====== 00:34:19.842 trtype: tcp 00:34:19.842 adrfam: ipv4 00:34:19.842 subtype: current discovery subsystem 00:34:19.842 treq: not specified, sq flow control disable supported 00:34:19.842 portid: 1 00:34:19.842 trsvcid: 4420 00:34:19.842 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:19.842 traddr: 10.0.0.1 00:34:19.842 eflags: none 00:34:19.842 sectype: none 00:34:19.842 =====Discovery Log Entry 1====== 00:34:19.842 trtype: tcp 00:34:19.842 adrfam: ipv4 00:34:19.842 subtype: nvme subsystem 00:34:19.842 treq: not specified, sq flow control disable supported 00:34:19.842 portid: 1 00:34:19.842 trsvcid: 4420 00:34:19.842 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:19.842 traddr: 10.0.0.1 00:34:19.842 eflags: none 00:34:19.842 sectype: none 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:19.842 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.843 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.102 nvme0n1 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:20.102 16:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.102 nvme0n1 00:34:20.102 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.361 16:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.361 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.362 
16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.362 nvme0n1 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.362 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.621 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.621 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.621 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.621 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.621 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.621 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.621 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:20.622 nvme0n1 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:20.622 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.881 nvme0n1 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.881 16:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:20.881 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.882 16:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.140 nvme0n1 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.140 
16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:21.140 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:21.141 
16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.141 16:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.141 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.399 nvme0n1 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.399 16:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.399 16:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.399 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.658 nvme0n1 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.658 16:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.658 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.917 nvme0n1 00:34:21.917 16:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:21.917 16:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:21.917 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.918 16:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.177 nvme0n1 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.177 16:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.177 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.436 nvme0n1 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.436 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.437 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.696 nvme0n1 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.696 
16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.696 16:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.955 nvme0n1 00:34:22.955 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.955 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.955 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.955 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.955 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.955 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.214 16:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.214 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.215 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.215 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.215 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.215 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.474 nvme0n1 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.474 16:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:23.474 
16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.474 16:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.474 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.733 nvme0n1 00:34:23.733 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.733 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.733 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.734 16:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.734 
16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.734 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.993 nvme0n1 00:34:23.993 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.993 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.993 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.993 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.993 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.993 16:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.993 16:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.993 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.994 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.562 nvme0n1 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.562 16:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.562 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.822 nvme0n1 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.822 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.081 16:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.081 16:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.339 nvme0n1 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.339 16:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:25.339 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.340 16:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.340 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.906 nvme0n1 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.906 16:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.906 16:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.906 16:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.165 nvme0n1 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.165 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.424 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.425 16:47:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.425 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.992 nvme0n1 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.992 16:47:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.992 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.993 16:47:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.993 16:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.993 16:47:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.560 nvme0n1 00:34:27.560 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.560 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.560 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.560 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.561 16:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.561 16:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.129 nvme0n1 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.129 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.697 nvme0n1 00:34:28.697 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.697 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.697 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.697 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.697 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.956 
16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.956 16:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.524 nvme0n1 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.524 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.783 nvme0n1 00:34:29.783 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.783 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.783 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.784 
16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.784 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.043 nvme0n1 
00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:30.043 16:47:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.043 
16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.043 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.044 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.044 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.044 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.044 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.044 16:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.044 nvme0n1 00:34:30.044 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.044 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.044 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.044 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.044 16:48:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.044 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 nvme0n1 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:30.303 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.561 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.561 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:30.561 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.561 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.562 16:48:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 nvme0n1 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.562 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 nvme0n1 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:30.822 
16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.822 16:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.081 nvme0n1 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 
00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.081 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.082 16:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.082 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.341 nvme0n1 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.341 16:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.341 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 nvme0n1 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.600 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.859 nvme0n1 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.859 16:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:31.859 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.860 16:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.860 16:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.860 16:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.118 nvme0n1 00:34:32.118 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.118 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.118 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.118 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.118 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.118 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.118 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.118 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.377 
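The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion traced above (auth.sh@58) is what makes the controller key optional per keyid. A standalone sketch of that idiom, with hypothetical placeholder key values (not the actual host/auth.sh):

```shell
#!/usr/bin/env bash
# Sketch of the :+ array idiom from the trace: the two extra arguments are
# produced only when a controller key exists for the given keyid, so a keyid
# with an empty ckey (keyid 4 in this log) gets no --dhchap-ctrlr-key at all.
# Key values here are hypothetical placeholders.
ckeys=( "DHHC-1:03:placeholder=" "" )   # keyid 0 has a ctrlr key, keyid 1 does not

build_attach_args() {
  local keyid=$1
  # Empty array when ckeys[keyid] is unset or empty; two words otherwise.
  local extra=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
  echo --dhchap-key "key${keyid}" "${extra[@]}"
}

build_attach_args 0   # → --dhchap-key key0 --dhchap-ctrlr-key ckey0
build_attach_args 1   # → --dhchap-key key1
```

This is why the keyid=4 `bdev_nvme_attach_controller` call later in the log carries only `--dhchap-key key4` while every other iteration also passes `--dhchap-ctrlr-key ckeyN`.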
16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.377 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.636 nvme0n1 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.636 16:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.636 16:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.636 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.895 nvme0n1 00:34:32.895 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.895 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.895 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.896 16:48:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.896 16:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.155 nvme0n1 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.155 16:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:33.155 16:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:33.155 
16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.155 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.413 nvme0n1 00:34:33.413 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.413 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.413 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.413 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.413 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.413 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.671 16:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:33.671 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.672 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.931 nvme0n1 
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:33.931 16:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==:
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==:
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]]
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==:
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:33.931 16:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:33.931 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:34.499 nvme0n1 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx:
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6:
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx:
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]]
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6:
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:34.499 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:34.500 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:34.758 nvme0n1 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:34.758 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==:
00:34:35.016 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g:
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==:
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]]
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g:
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.017 16:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.275 nvme0n1 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=:
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=:
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.275 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.841 nvme0n1 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.841 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:35.841 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:35.841 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.841 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.841 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.841 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA:
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=:
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA:
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]]
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=:
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:35.842 16:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.409 nvme0n1 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==:
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==:
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==:
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]]
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==:
00:34:36.409 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.410 16:48:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.977 nvme0n1 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.977 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx:
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6:
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx:
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]]
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6:
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.236 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.804 nvme0n1 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==:
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g:
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==:
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]]
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g:
00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.804 16:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.371 nvme0n1 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.371 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.372 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.630 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.630 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:38.630 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.631 16:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:39.199 nvme0n1 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:39.199 16:48:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.199 nvme0n1 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.199 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.458 nvme0n1 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:39.458 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.459 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.717 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.718 nvme0n1 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.718 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.977 nvme0n1 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.977 16:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.977 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.236 nvme0n1 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.236 16:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.236 16:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.236 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.495 nvme0n1 00:34:40.495 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.495 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.495 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.495 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:40.496 16:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.496 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.755 nvme0n1 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.755 
16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.755 16:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.755 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.014 nvme0n1 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.014 16:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.014 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.015 16:48:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.015 16:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.274 nvme0n1 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:41.274 16:48:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.274 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.533 nvme0n1 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.533 
16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.533 
16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.533 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.534 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.534 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.534 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.534 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.534 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.534 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.793 nvme0n1 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.793 16:48:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.793 16:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.052 nvme0n1 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.052 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.053 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.053 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:42.053 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.053 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.311 nvme0n1 00:34:42.311 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.311 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.311 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.311 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.311 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.311 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.570 16:48:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.570 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.829 nvme0n1 00:34:42.829 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.829 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.829 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.829 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.830 16:48:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.830 16:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.088 nvme0n1 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.088 
16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:43.088 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.089 16:48:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.089 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.657 nvme0n1 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:43.657 16:48:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.657 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.915 nvme0n1 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.915 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.916 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.916 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:43.916 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.916 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.916 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:43.916 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.916 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:43.916 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:44.174 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.174 16:48:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:44.174 
16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.174 16:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.174 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.433 nvme0n1 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.434 16:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.434 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.002 nvme0n1 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.002 16:48:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.002 16:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.260 nvme0n1 00:34:45.260 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.260 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.260 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.260 
16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.260 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.260 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA5MGE2MDc3NTkzMTU2MTk3YWIzZjg4ZGE5NzYxMWY9/ghA: 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: ]] 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjVlMDNiM2I1NThhODNiM2EzMjkxNjA0ZjZmY2U3MGExYjA0ODNkZjhhNGJlNGU5OTUxODgzOWViOGJkNGU0OH0SdnA=: 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.519 16:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.519 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.087 nvme0n1 00:34:46.087 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.087 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.087 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.087 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.087 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.087 16:48:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.087 16:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:46.087 16:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.087 16:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.087 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.662 nvme0n1 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.662 16:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.662 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:46.663 16:48:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.663 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.664 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.664 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:46.664 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.664 16:48:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.235 nvme0n1 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWRlNjJjOWQwNGY0ZTRmNzA0Y2MwMGNlM2MwNWIxYWM1NmJkYzE3ZTAxNTBlYjY4fPsVdA==: 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: ]] 00:34:47.235 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjJiMTczMzI1MTczMzQ0M2E5ZTY1YjZhZjk1MDFlZDUf60+g: 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:47.494 16:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.494 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.062 nvme0n1 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E2YTNjZDlhNmVmNTdlY2UxMWNiMThkMmY3Y2QxYTlkM2IyMzI3MDg1MzI2ZDgzZjI5MmEyMmZmNzBlMTg3NB9r7Lw=: 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.062 
16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.062 16:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.630 nvme0n1 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.630 request: 00:34:48.630 { 00:34:48.630 "name": "nvme0", 00:34:48.630 "trtype": "tcp", 00:34:48.630 "traddr": "10.0.0.1", 00:34:48.630 "adrfam": "ipv4", 00:34:48.630 "trsvcid": "4420", 00:34:48.630 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:48.630 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:48.630 "prchk_reftag": false, 00:34:48.630 "prchk_guard": false, 00:34:48.630 "hdgst": false, 00:34:48.630 "ddgst": false, 00:34:48.630 "allow_unrecognized_csi": false, 00:34:48.630 "method": "bdev_nvme_attach_controller", 00:34:48.630 "req_id": 1 00:34:48.630 } 00:34:48.630 Got JSON-RPC error 
response 00:34:48.630 response: 00:34:48.630 { 00:34:48.630 "code": -5, 00:34:48.630 "message": "Input/output error" 00:34:48.630 } 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.630 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.889 request: 
00:34:48.889 { 00:34:48.889 "name": "nvme0", 00:34:48.889 "trtype": "tcp", 00:34:48.889 "traddr": "10.0.0.1", 00:34:48.889 "adrfam": "ipv4", 00:34:48.889 "trsvcid": "4420", 00:34:48.889 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:48.889 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:48.889 "prchk_reftag": false, 00:34:48.889 "prchk_guard": false, 00:34:48.889 "hdgst": false, 00:34:48.889 "ddgst": false, 00:34:48.889 "dhchap_key": "key2", 00:34:48.889 "allow_unrecognized_csi": false, 00:34:48.889 "method": "bdev_nvme_attach_controller", 00:34:48.889 "req_id": 1 00:34:48.889 } 00:34:48.889 Got JSON-RPC error response 00:34:48.889 response: 00:34:48.889 { 00:34:48.889 "code": -5, 00:34:48.889 "message": "Input/output error" 00:34:48.889 } 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:48.889 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.890 16:48:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.890 request: 00:34:48.890 { 00:34:48.890 "name": "nvme0", 00:34:48.890 "trtype": "tcp", 00:34:48.890 "traddr": "10.0.0.1", 00:34:48.890 "adrfam": "ipv4", 00:34:48.890 "trsvcid": "4420", 00:34:48.890 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:48.890 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:48.890 "prchk_reftag": false, 00:34:48.890 "prchk_guard": false, 00:34:48.890 "hdgst": false, 00:34:48.890 "ddgst": false, 00:34:48.890 "dhchap_key": "key1", 00:34:48.890 "dhchap_ctrlr_key": "ckey2", 00:34:48.890 "allow_unrecognized_csi": false, 00:34:48.890 "method": "bdev_nvme_attach_controller", 00:34:48.890 "req_id": 1 00:34:48.890 } 00:34:48.890 Got JSON-RPC error response 00:34:48.890 response: 00:34:48.890 { 00:34:48.890 "code": -5, 00:34:48.890 "message": "Input/output error" 00:34:48.890 } 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.890 16:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.149 nvme0n1 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:49.149 16:48:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:49.149 
16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.149 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.408 request: 00:34:49.408 { 00:34:49.408 "name": "nvme0", 00:34:49.408 "dhchap_key": "key1", 00:34:49.408 "dhchap_ctrlr_key": "ckey2", 00:34:49.408 "method": "bdev_nvme_set_keys", 00:34:49.408 "req_id": 1 00:34:49.408 } 00:34:49.408 Got JSON-RPC error response 00:34:49.408 response: 
00:34:49.408 { 00:34:49.408 "code": -13, 00:34:49.408 "message": "Permission denied" 00:34:49.408 } 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:49.408 16:48:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:50.344 16:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.344 16:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:50.344 16:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.344 16:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.344 16:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.344 16:48:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:50.344 16:48:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzYzI3NDQ4YzUzYjY3OTVkNDkyNmIxMzhkNjczZTFmNzg2YjYyNGRiYTgzYzYxpee9bA==: 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmM3NTI5ODYwMDllNDc1NDQ2NWY3ZWQ5ODQ4NWM5ZjYwNThlZjkxNzI4NmY0OTI3lZYqLA==: 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.721 nvme0n1 00:34:51.721 16:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY1Y2Y1NDYwYzI0ZGVhNTI0Y2Q4YzA1OWUwOWMwNjgtzFKx: 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzRiMTI0YmI1YmM0ZTBmNWY3ZDViNjAwNTU0MDViZGGBUsz6: 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:51.721 16:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.721 request: 00:34:51.721 { 00:34:51.721 "name": "nvme0", 00:34:51.721 "dhchap_key": "key2", 00:34:51.721 "dhchap_ctrlr_key": "ckey1", 00:34:51.721 "method": "bdev_nvme_set_keys", 00:34:51.721 "req_id": 1 00:34:51.721 } 00:34:51.721 Got JSON-RPC error response 00:34:51.721 response: 00:34:51.721 { 00:34:51.721 "code": -13, 00:34:51.721 "message": "Permission denied" 00:34:51.721 } 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:51.721 16:48:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:51.721 16:48:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:52.656 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.656 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:52.656 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.656 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.656 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:52.915 rmmod nvme_tcp 
00:34:52.915 rmmod nvme_fabrics 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1180967 ']' 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1180967 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1180967 ']' 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1180967 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1180967 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1180967' 00:34:52.915 killing process with pid 1180967 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1180967 00:34:52.915 16:48:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1180967 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.174 16:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:55.080 16:48:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:55.080 16:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:58.497 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:58.497 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:58.756 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:59.016 16:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.x1X /tmp/spdk.key-null.wWV /tmp/spdk.key-sha256.y24 /tmp/spdk.key-sha384.DLy 
/tmp/spdk.key-sha512.PTE /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:59.016 16:48:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:01.552 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:01.552 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:01.552 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:01.811 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:01.811 00:35:01.811 real 0m53.797s 00:35:01.811 user 0m48.679s 00:35:01.811 sys 0m12.584s 00:35:01.811 16:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:01.811 16:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.811 ************************************ 00:35:01.811 END TEST nvmf_auth_host 00:35:01.811 ************************************ 00:35:01.811 16:48:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:35:01.811 16:48:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:01.811 16:48:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:01.811 16:48:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:01.811 16:48:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.811 ************************************ 00:35:01.811 START TEST nvmf_digest 00:35:01.811 ************************************ 00:35:01.811 16:48:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:02.071 * Looking for test storage... 00:35:02.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:02.071 16:48:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:02.071 16:48:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:02.071 16:48:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.071 16:48:32 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:02.071 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:02.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.072 --rc genhtml_branch_coverage=1 00:35:02.072 --rc genhtml_function_coverage=1 00:35:02.072 --rc genhtml_legend=1 00:35:02.072 --rc geninfo_all_blocks=1 00:35:02.072 --rc geninfo_unexecuted_blocks=1 00:35:02.072 00:35:02.072 ' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:02.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.072 --rc genhtml_branch_coverage=1 00:35:02.072 --rc genhtml_function_coverage=1 00:35:02.072 --rc genhtml_legend=1 00:35:02.072 --rc geninfo_all_blocks=1 00:35:02.072 --rc geninfo_unexecuted_blocks=1 00:35:02.072 00:35:02.072 ' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:02.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.072 --rc genhtml_branch_coverage=1 00:35:02.072 --rc genhtml_function_coverage=1 00:35:02.072 --rc genhtml_legend=1 00:35:02.072 --rc geninfo_all_blocks=1 00:35:02.072 --rc geninfo_unexecuted_blocks=1 00:35:02.072 00:35:02.072 ' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:02.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.072 --rc genhtml_branch_coverage=1 00:35:02.072 --rc genhtml_function_coverage=1 00:35:02.072 --rc genhtml_legend=1 00:35:02.072 --rc geninfo_all_blocks=1 00:35:02.072 --rc geninfo_unexecuted_blocks=1 00:35:02.072 00:35:02.072 ' 00:35:02.072 16:48:32 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.072 
16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:02.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:02.072 16:48:32 
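The `[: : integer expression expected` warning above comes from `'[' '' -eq 1 ']'`: an unset variable reaches bash's numeric test as an empty string, the test errors to stderr and returns non-zero, and the script simply falls through to the next branch. A minimal sketch of the failure and a guarded alternative (`HUGE_EVEN_ALLOC` here is a hypothetical stand-in, not the actual variable used at nvmf/common.sh line 33):

```shell
# Reproduce the "integer expression expected" situation from the log and a
# guarded alternative. HUGE_EVEN_ALLOC is a hypothetical stand-in variable.
HUGE_EVEN_ALLOC=""

# Unguarded numeric test: errors on the empty string, returns non-zero, and
# the script falls through exactly as seen in the trace above.
[ "$HUGE_EVEN_ALLOC" -eq 1 ] 2>/dev/null && echo "unguarded: matched" || echo "unguarded: fell through"

# Guarded form: default the empty value to 0 so the operand stays numeric
# and no warning is printed.
[ "${HUGE_EVEN_ALLOC:-0}" -eq 1 ] && echo "guarded: matched" || echo "guarded: fell through"
```

Both forms end up in the "fell through" branch; the guard only suppresses the stderr noise, which is why the test run is unaffected by the warning.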
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:02.072 16:48:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.643 16:48:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:08.643 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:08.644 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:08.644 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:08.644 Found net devices under 0000:af:00.0: cvl_0_0 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:08.644 Found net devices under 0000:af:00.1: cvl_0_1 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:08.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:08.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:35:08.644 00:35:08.644 --- 10.0.0.2 ping statistics --- 00:35:08.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.644 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:08.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:08.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:35:08.644 00:35:08.644 --- 10.0.0.1 ping statistics --- 00:35:08.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.644 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.644 16:48:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:08.644 ************************************ 00:35:08.644 START TEST nvmf_digest_clean 00:35:08.644 ************************************ 00:35:08.644 
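The `nvmf_tcp_init` sequence traced above moves one port of the NIC pair into a network namespace and addresses both ends from the same /24, so target (10.0.0.2, in the namespace) and initiator (10.0.0.1, in the root namespace) can talk over real hardware on one host. A dry-run sketch of the same sequence; the device and namespace names are copied from this log, and `run` only echoes, so nothing here needs root:

```shell
# Dry-run of the namespace plumbing performed by nvmf_tcp_init in this log.
# Device/namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are taken from
# the trace above; swap run() for direct execution (as root) to apply it.
run() { echo "+ $*"; }

TARGET_NS=cvl_0_0_ns_spdk
run ip netns add "$TARGET_NS"
run ip link set cvl_0_0 netns "$TARGET_NS"             # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                 # initiator -> target
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1      # target -> initiator
```

The two pings at the end mirror the connectivity check in the trace; the `NVMF_TARGET_NS_CMD` prefix (`ip netns exec cvl_0_0_ns_spdk`) is then prepended to every target-side command, which is why `nvmf_tgt` below is launched through it.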
16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1194962 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1194962 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194962 ']' 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.644 16:48:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.644 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.644 [2024-12-14 16:48:38.085286] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:08.644 [2024-12-14 16:48:38.085323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.644 [2024-12-14 16:48:38.163327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.644 [2024-12-14 16:48:38.184242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.645 [2024-12-14 16:48:38.184279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.645 [2024-12-14 16:48:38.184286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.645 [2024-12-14 16:48:38.184292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.645 [2024-12-14 16:48:38.184297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:08.645 [2024-12-14 16:48:38.184832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.645 null0 00:35:08.645 [2024-12-14 16:48:38.350892] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.645 [2024-12-14 16:48:38.375096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194988 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1194988 /var/tmp/bperf.sock 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194988 ']' 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.645 [2024-12-14 16:48:38.428986] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:08.645 [2024-12-14 16:48:38.429026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194988 ] 00:35:08.645 [2024-12-14 16:48:38.505760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.645 [2024-12-14 16:48:38.528060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:08.645 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:08.904 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.904 16:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.162 nvme0n1 00:35:09.162 16:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:09.162 16:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.419 Running I/O for 2 seconds... 00:35:11.289 25152.00 IOPS, 98.25 MiB/s [2024-12-14T15:48:41.375Z] 25834.00 IOPS, 100.91 MiB/s 00:35:11.289 Latency(us) 00:35:11.289 [2024-12-14T15:48:41.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.289 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:11.289 nvme0n1 : 2.04 25348.58 99.02 0.00 0.00 4947.04 2590.23 43690.67 00:35:11.289 [2024-12-14T15:48:41.375Z] =================================================================================================================== 00:35:11.289 [2024-12-14T15:48:41.375Z] Total : 25348.58 99.02 0.00 0.00 4947.04 2590.23 43690.67 00:35:11.289 { 00:35:11.289 "results": [ 00:35:11.289 { 00:35:11.289 "job": "nvme0n1", 00:35:11.289 "core_mask": "0x2", 00:35:11.289 "workload": "randread", 00:35:11.289 "status": "finished", 00:35:11.289 "queue_depth": 128, 00:35:11.289 "io_size": 4096, 00:35:11.289 "runtime": 2.043349, 00:35:11.289 "iops": 25348.58215605851, 00:35:11.289 "mibps": 99.01789904710355, 00:35:11.289 "io_failed": 0, 00:35:11.289 "io_timeout": 0, 00:35:11.289 "avg_latency_us": 4947.042664813242, 00:35:11.289 "min_latency_us": 2590.232380952381, 00:35:11.289 "max_latency_us": 43690.666666666664 00:35:11.289 } 00:35:11.289 ], 00:35:11.289 "core_count": 1 00:35:11.289 } 00:35:11.289 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
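The MiB/s column in bdevperf's summary is derived directly from the other two reported figures: IOPS times `io_size`, scaled to MiB. The JSON result above can be cross-checked in one line:

```shell
# Cross-check bdevperf's MiB/s against its reported IOPS and io_size:
# MiB/s = iops * io_size / 2^20. Figures are from the JSON result above
# (25348.58 IOPS at 4096-byte I/Os).
awk 'BEGIN { printf "%.2f MiB/s\n", 25348.58 * 4096 / (1024 * 1024) }'
# -> 99.02 MiB/s
```

This matches the 99.02 MiB/s in the summary table, confirming the table and the JSON block describe the same run.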
get_accel_stats 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:11.547 | select(.opcode=="crc32c") 00:35:11.547 | "\(.module_name) \(.executed)"' 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194988 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194988 ']' 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194988 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194988 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
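The pass/fail decision here rests on the jq filter shown above: `get_accel_stats` fetches bperf's `accel_get_stats` RPC output and reduces it to `"<module> <executed>"` for the crc32c opcode, which the script then compares against the expected module (`software`, since DSA is disabled in this run). A sketch of the filter against a hand-made document; the JSON shape is assumed from the filter itself, not dumped in this log, and the real RPC output carries more fields:

```shell
# Apply the log's jq filter to a hypothetical accel_get_stats-shaped document.
# Only .operations[] matters to the filter; field values here are made up.
stats='{"operations":[
  {"opcode":"crc32c","module_name":"software","executed":12},
  {"opcode":"copy","module_name":"software","executed":3}]}'
echo "$stats" | jq -rc '.operations[]
  | select(.opcode=="crc32c")
  | "\(.module_name) \(.executed)"'
# -> software 12
```

`read -r acc_module acc_executed` then splits that single line into the two variables checked by `(( acc_executed > 0 ))` and the `[[ software == \s\o\f\t\w\a\r\e ]]` comparison in the trace.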
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194988' 00:35:11.547 killing process with pid 1194988 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194988 00:35:11.547 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.547 00:35:11.547 Latency(us) 00:35:11.547 [2024-12-14T15:48:41.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.547 [2024-12-14T15:48:41.633Z] =================================================================================================================== 00:35:11.547 [2024-12-14T15:48:41.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.547 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194988 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195447 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1195447 /var/tmp/bperf.sock 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195447 ']' 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.806 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:11.806 [2024-12-14 16:48:41.829207] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:11.806 [2024-12-14 16:48:41.829256] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195447 ] 00:35:11.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:11.806 Zero copy mechanism will not be used. 
00:35:12.065 [2024-12-14 16:48:41.903287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.065 [2024-12-14 16:48:41.922889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.065 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.065 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:12.065 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:12.065 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:12.065 16:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:12.324 16:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.324 16:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.582 nvme0n1 00:35:12.582 16:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:12.582 16:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.582 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:12.582 Zero copy mechanism will not be used. 00:35:12.582 Running I/O for 2 seconds... 
00:35:14.893 6172.00 IOPS, 771.50 MiB/s [2024-12-14T15:48:44.980Z] 5934.50 IOPS, 741.81 MiB/s 00:35:14.894 Latency(us) 00:35:14.894 [2024-12-14T15:48:44.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.894 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:14.894 nvme0n1 : 2.00 5935.26 741.91 0.00 0.00 2692.70 647.56 5554.96 00:35:14.894 [2024-12-14T15:48:44.980Z] =================================================================================================================== 00:35:14.894 [2024-12-14T15:48:44.980Z] Total : 5935.26 741.91 0.00 0.00 2692.70 647.56 5554.96 00:35:14.894 { 00:35:14.894 "results": [ 00:35:14.894 { 00:35:14.894 "job": "nvme0n1", 00:35:14.894 "core_mask": "0x2", 00:35:14.894 "workload": "randread", 00:35:14.894 "status": "finished", 00:35:14.894 "queue_depth": 16, 00:35:14.894 "io_size": 131072, 00:35:14.894 "runtime": 2.0048, 00:35:14.894 "iops": 5935.25538707103, 00:35:14.894 "mibps": 741.9069233838787, 00:35:14.894 "io_failed": 0, 00:35:14.894 "io_timeout": 0, 00:35:14.894 "avg_latency_us": 2692.7009811948988, 00:35:14.894 "min_latency_us": 647.5580952380952, 00:35:14.894 "max_latency_us": 5554.95619047619 00:35:14.894 } 00:35:14.894 ], 00:35:14.894 "core_count": 1 00:35:14.894 } 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:14.894 | select(.opcode=="crc32c") 00:35:14.894 | "\(.module_name) \(.executed)"' 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195447 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195447 ']' 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195447 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195447 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195447' 00:35:14.894 killing process with pid 1195447 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195447 00:35:14.894 Received shutdown signal, test time was about 2.000000 seconds 
00:35:14.894 00:35:14.894 Latency(us) 00:35:14.894 [2024-12-14T15:48:44.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.894 [2024-12-14T15:48:44.980Z] =================================================================================================================== 00:35:14.894 [2024-12-14T15:48:44.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.894 16:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195447 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1196113 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1196113 /var/tmp/bperf.sock 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1196113 ']' 00:35:15.153 16:48:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.153 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.153 [2024-12-14 16:48:45.128011] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:15.153 [2024-12-14 16:48:45.128063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196113 ] 00:35:15.153 [2024-12-14 16:48:45.200663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.153 [2024-12-14 16:48:45.219972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.411 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.411 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:15.411 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:15.411 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:15.411 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:15.669 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.669 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.927 nvme0n1 00:35:15.927 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:15.927 16:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:16.185 Running I/O for 2 seconds... 
00:35:18.056 28406.00 IOPS, 110.96 MiB/s [2024-12-14T15:48:48.142Z] 28562.00 IOPS, 111.57 MiB/s 00:35:18.056 Latency(us) 00:35:18.056 [2024-12-14T15:48:48.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.056 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:18.056 nvme0n1 : 2.01 28579.45 111.64 0.00 0.00 4472.88 2231.34 11546.82 00:35:18.056 [2024-12-14T15:48:48.142Z] =================================================================================================================== 00:35:18.056 [2024-12-14T15:48:48.142Z] Total : 28579.45 111.64 0.00 0.00 4472.88 2231.34 11546.82 00:35:18.056 { 00:35:18.056 "results": [ 00:35:18.056 { 00:35:18.056 "job": "nvme0n1", 00:35:18.056 "core_mask": "0x2", 00:35:18.056 "workload": "randwrite", 00:35:18.056 "status": "finished", 00:35:18.056 "queue_depth": 128, 00:35:18.056 "io_size": 4096, 00:35:18.056 "runtime": 2.005497, 00:35:18.056 "iops": 28579.449383369807, 00:35:18.056 "mibps": 111.63847415378831, 00:35:18.056 "io_failed": 0, 00:35:18.056 "io_timeout": 0, 00:35:18.056 "avg_latency_us": 4472.879941776418, 00:35:18.056 "min_latency_us": 2231.344761904762, 00:35:18.056 "max_latency_us": 11546.819047619048 00:35:18.056 } 00:35:18.056 ], 00:35:18.056 "core_count": 1 00:35:18.056 } 00:35:18.056 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:18.056 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:18.056 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:18.056 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:18.056 | select(.opcode=="crc32c") 00:35:18.056 | "\(.module_name) \(.executed)"' 00:35:18.056 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1196113 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1196113 ']' 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1196113 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196113 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196113' 00:35:18.315 killing process with pid 1196113 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1196113 00:35:18.315 Received shutdown signal, test time was about 2.000000 seconds 
00:35:18.315 00:35:18.315 Latency(us) 00:35:18.315 [2024-12-14T15:48:48.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.315 [2024-12-14T15:48:48.401Z] =================================================================================================================== 00:35:18.315 [2024-12-14T15:48:48.401Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.315 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1196113 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1196579 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1196579 /var/tmp/bperf.sock 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1196579 ']' 00:35:18.574 16:48:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.574 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:18.574 [2024-12-14 16:48:48.531294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:18.574 [2024-12-14 16:48:48.531342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196579 ] 00:35:18.574 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:18.574 Zero copy mechanism will not be used. 
00:35:18.574 [2024-12-14 16:48:48.603111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.574 [2024-12-14 16:48:48.624283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.833 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.833 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:18.833 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:18.833 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:18.833 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:19.091 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.091 16:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.350 nvme0n1 00:35:19.350 16:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:19.350 16:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.350 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:19.350 Zero copy mechanism will not be used. 00:35:19.350 Running I/O for 2 seconds... 
00:35:21.662 5995.00 IOPS, 749.38 MiB/s [2024-12-14T15:48:51.748Z] 6346.50 IOPS, 793.31 MiB/s 00:35:21.662 Latency(us) 00:35:21.662 [2024-12-14T15:48:51.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.662 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:21.662 nvme0n1 : 2.00 6345.19 793.15 0.00 0.00 2517.47 1302.92 4681.14 00:35:21.662 [2024-12-14T15:48:51.748Z] =================================================================================================================== 00:35:21.662 [2024-12-14T15:48:51.748Z] Total : 6345.19 793.15 0.00 0.00 2517.47 1302.92 4681.14 00:35:21.662 { 00:35:21.662 "results": [ 00:35:21.662 { 00:35:21.662 "job": "nvme0n1", 00:35:21.662 "core_mask": "0x2", 00:35:21.662 "workload": "randwrite", 00:35:21.662 "status": "finished", 00:35:21.662 "queue_depth": 16, 00:35:21.662 "io_size": 131072, 00:35:21.662 "runtime": 2.003565, 00:35:21.662 "iops": 6345.189699360889, 00:35:21.662 "mibps": 793.1487124201111, 00:35:21.662 "io_failed": 0, 00:35:21.662 "io_timeout": 0, 00:35:21.662 "avg_latency_us": 2517.4696311612033, 00:35:21.662 "min_latency_us": 1302.9180952380952, 00:35:21.662 "max_latency_us": 4681.142857142857 00:35:21.662 } 00:35:21.662 ], 00:35:21.662 "core_count": 1 00:35:21.662 } 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:21.662 | select(.opcode=="crc32c") 00:35:21.662 | "\(.module_name) \(.executed)"' 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1196579 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1196579 ']' 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1196579 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196579 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196579' 00:35:21.662 killing process with pid 1196579 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1196579 00:35:21.662 Received shutdown signal, test time was about 2.000000 seconds 
00:35:21.662 00:35:21.662 Latency(us) 00:35:21.662 [2024-12-14T15:48:51.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.662 [2024-12-14T15:48:51.748Z] =================================================================================================================== 00:35:21.662 [2024-12-14T15:48:51.748Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:21.662 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1196579 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1194962 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194962 ']' 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194962 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194962 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194962' 00:35:21.921 killing process with pid 1194962 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194962 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194962 00:35:21.921 00:35:21.921 
real 0m13.959s 00:35:21.921 user 0m26.811s 00:35:21.921 sys 0m4.563s 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.921 16:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:21.921 ************************************ 00:35:21.921 END TEST nvmf_digest_clean 00:35:21.921 ************************************ 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:22.180 ************************************ 00:35:22.180 START TEST nvmf_digest_error 00:35:22.180 ************************************ 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1197190 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1197190 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197190 ']' 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.180 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.180 [2024-12-14 16:48:52.121257] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:22.180 [2024-12-14 16:48:52.121302] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:22.180 [2024-12-14 16:48:52.200922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.180 [2024-12-14 16:48:52.221712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:22.180 [2024-12-14 16:48:52.221747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:22.180 [2024-12-14 16:48:52.221754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:22.180 [2024-12-14 16:48:52.221760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:22.180 [2024-12-14 16:48:52.221765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:22.180 [2024-12-14 16:48:52.222296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.439 [2024-12-14 16:48:52.310775] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.439 16:48:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.439 null0 00:35:22.439 [2024-12-14 16:48:52.396609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:22.439 [2024-12-14 16:48:52.420800] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197294 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197294 /var/tmp/bperf.sock 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197294 ']' 
00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:22.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.439 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.439 [2024-12-14 16:48:52.471251] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:22.439 [2024-12-14 16:48:52.471290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197294 ] 00:35:22.698 [2024-12-14 16:48:52.545232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.698 [2024-12-14 16:48:52.567167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.698 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.698 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:22.698 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:22.698 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:22.956 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:22.956 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.956 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.956 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.956 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:22.956 16:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.215 nvme0n1 00:35:23.215 16:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:23.215 16:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.215 16:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.215 16:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.215 16:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:23.215 16:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:23.215 Running I/O for 2 seconds... 00:35:23.215 [2024-12-14 16:48:53.245696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.215 [2024-12-14 16:48:53.245730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.215 [2024-12-14 16:48:53.245742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.215 [2024-12-14 16:48:53.254364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.215 [2024-12-14 16:48:53.254388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.215 [2024-12-14 16:48:53.254398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.215 [2024-12-14 16:48:53.265323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.215 [2024-12-14 16:48:53.265346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.215 [2024-12-14 16:48:53.265355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.215 [2024-12-14 16:48:53.275887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.215 [2024-12-14 16:48:53.275909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2730 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.215 [2024-12-14 16:48:53.275919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.215 [2024-12-14 16:48:53.288975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.215 [2024-12-14 16:48:53.288996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.215 [2024-12-14 16:48:53.289005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.215 [2024-12-14 16:48:53.296928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.215 [2024-12-14 16:48:53.296948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.215 [2024-12-14 16:48:53.296961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.307470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.307491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.307499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.318111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.318132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.318141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.327605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.327626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.327634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.336194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.336215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.336224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.345897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.345916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.345924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.356115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.356136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.356144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.364642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.364662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.364670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.374229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.374249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.374257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.383547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.474 [2024-12-14 16:48:53.383577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.474 [2024-12-14 16:48:53.383585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.474 [2024-12-14 16:48:53.392655] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.392676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.392684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.401433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.401454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.401462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.410347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.410367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.410375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.420972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.420992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.421000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.432898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.432920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.432928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.443819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.443839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.443847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.455747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.455767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.455775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.464751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.464772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.464780] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.473201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.473221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.473229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.483197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.483218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.483226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.492457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.492478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.492486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.502857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.502877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 
16:48:53.502885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.512946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.512966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.512974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.521333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.521352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.521360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.532765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.532785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.532792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.543939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.543959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22309 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.543967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.475 [2024-12-14 16:48:53.552025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.475 [2024-12-14 16:48:53.552044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.475 [2024-12-14 16:48:53.552054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-12-14 16:48:53.562621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.734 [2024-12-14 16:48:53.562641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-12-14 16:48:53.562649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-12-14 16:48:53.574332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.734 [2024-12-14 16:48:53.574352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-12-14 16:48:53.574360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-12-14 16:48:53.586322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.734 [2024-12-14 16:48:53.586342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-12-14 16:48:53.586350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-12-14 16:48:53.597372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.734 [2024-12-14 16:48:53.597392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.734 [2024-12-14 16:48:53.597400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.734 [2024-12-14 16:48:53.605848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.734 [2024-12-14 16:48:53.605869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.605876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.617613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.617633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.617641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.626604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.626624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.626633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.637825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.637845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.637853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.650224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.650243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.650251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.661277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.661297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.661304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.669781] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.669802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.669810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.681090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.681110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.681118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.689900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.689919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.689926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.699330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.699350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.699358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.710852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.710872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.710879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.722198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.722217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.722225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.731482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.731501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.731512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.739955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.739974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.739982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.749487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.749508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.749517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.758742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.758762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.758770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.767430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.767450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.767459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.776936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.776955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 
16:48:53.776963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.788864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.788884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.788892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.799155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.799175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.799183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.735 [2024-12-14 16:48:53.811291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.735 [2024-12-14 16:48:53.811310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.735 [2024-12-14 16:48:53.811317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.994 [2024-12-14 16:48:53.819991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.994 [2024-12-14 16:48:53.820014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15743 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.994 [2024-12-14 16:48:53.820022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.994 [2024-12-14 16:48:53.833135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.994 [2024-12-14 16:48:53.833155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.994 [2024-12-14 16:48:53.833162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.994 [2024-12-14 16:48:53.844758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.994 [2024-12-14 16:48:53.844778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.994 [2024-12-14 16:48:53.844785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.994 [2024-12-14 16:48:53.854122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.994 [2024-12-14 16:48:53.854141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.994 [2024-12-14 16:48:53.854148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.863399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.863419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.863426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.872399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.872418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.872426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.882223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.882242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.882250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.890773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.890792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.890800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.902189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.902209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.902217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.914480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.914499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.914507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.925258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.925277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.925285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.937806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.937825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.937833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.946204] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.946231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.946239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.955731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.955750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.955757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.967436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.967456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.967464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.978450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.978469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.978477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.987355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.987375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.987382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:53.998691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:53.998710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:53.998721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:54.011297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:54.011316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:54.011324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:54.023883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:54.023903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:54.023910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:54.032183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:54.032203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:54.032210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:54.043578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:54.043598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:54.043605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:54.056396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:54.056416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 16:48:54.056424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.995 [2024-12-14 16:48:54.068504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:23.995 [2024-12-14 16:48:54.068524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.995 [2024-12-14 
16:48:54.068532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.079858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.079878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.079887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.092512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.092532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.092540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.106330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.106349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.106357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.114553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.114577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24529 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.114584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.124612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.124632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.124639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.135399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.135419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.135427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.146011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.146031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.146039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.155242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.155262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.155270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.166759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.166779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.166786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.177923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.177943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.177951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.187337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.187356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.187367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.195687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.195706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.195714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.205538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.205562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.205571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.214729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.214748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.214756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.223402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.223422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.223430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 24501.00 IOPS, 95.71 MiB/s [2024-12-14T15:48:54.340Z] 
[2024-12-14 16:48:54.233861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.233881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.233889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.243344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.243365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.243372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.252778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.252797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.252805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.260786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.260807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.260815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.270182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.270205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.270213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.280155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.280175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.280183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.290756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.290776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.290784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.302597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.302616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.302624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.310268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.310287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.310294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.319380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.319400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.319408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.254 [2024-12-14 16:48:54.328562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.254 [2024-12-14 16:48:54.328581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.254 [2024-12-14 16:48:54.328589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.340141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.340161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:24.513 [2024-12-14 16:48:54.340169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.351150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.351170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.351178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.363489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.363508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.363515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.372026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.372045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.372053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.382181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.382201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:103 nsid:1 lba:9028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.382209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.394488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.394508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.394516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.403916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.403935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.403943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.412641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.412660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.412668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.423674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.423694] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.423702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.433158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.433177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.433185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.442955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.442975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.442986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.453131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.453149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.453157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.461574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.461593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.461600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.471801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.471820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.471828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.480851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.480870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.480878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.491027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.491048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.491056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.502152] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.502171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.513 [2024-12-14 16:48:54.502179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.513 [2024-12-14 16:48:54.511517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.513 [2024-12-14 16:48:54.511536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.511544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.514 [2024-12-14 16:48:54.520485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.514 [2024-12-14 16:48:54.520504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.520512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.514 [2024-12-14 16:48:54.528950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.514 [2024-12-14 16:48:54.528970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.528978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:24.514 [2024-12-14 16:48:54.539692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.514 [2024-12-14 16:48:54.539711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.539719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.514 [2024-12-14 16:48:54.549721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.514 [2024-12-14 16:48:54.549740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.549748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.514 [2024-12-14 16:48:54.560368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.514 [2024-12-14 16:48:54.560387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.560395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.514 [2024-12-14 16:48:54.569131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.514 [2024-12-14 16:48:54.569150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.569158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.514 [2024-12-14 16:48:54.578796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.514 [2024-12-14 16:48:54.578816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.578824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.514 [2024-12-14 16:48:54.587216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.514 [2024-12-14 16:48:54.587236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.514 [2024-12-14 16:48:54.587244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.598921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.598940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.598948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.607591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.607609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.607620] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.618615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.618635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.618642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.629432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.629452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.629460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.637361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.637380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.637389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.647204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.647224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12002 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.647231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.657750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.657769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.657777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.665658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.665678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.665685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.676729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.676750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.676758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.686027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.686046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.686054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.694648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.694673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.694681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.703403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.703423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.703431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.714303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.714324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.714332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.721816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 
16:48:54.721836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.721843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.732788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.732807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.732815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.743342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.743361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.751721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.751740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.751748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.761620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.761639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.761647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.771218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.771238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.771246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.781323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.781344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.781352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.791210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.791230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.791239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.800431] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.800451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.800459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.808500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.808520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.808528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.820911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.820932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.820940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.833202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.833222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.833230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.841534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.841553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.841569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.773 [2024-12-14 16:48:54.852114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:24.773 [2024-12-14 16:48:54.852134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.773 [2024-12-14 16:48:54.852142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.863469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.863489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.863500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.873181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.873204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.873213] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.881583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.881613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.881622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.892990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.893012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.893020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.902880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.902901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.902909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.911189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.911209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 
16:48:54.911217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.923218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.923239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.923246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.934198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.934218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.934226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.942053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.942073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.942081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.953929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.953949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5528 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.953958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.962100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.962120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.962128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.973143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.973163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.973171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.984182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.984202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.984210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:54.992346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:54.992367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:54.992375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:55.002273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:55.002293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:55.002301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:55.013704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:55.013724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:55.013733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:55.023060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:55.023079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:55.023087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:55.031715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:55.031735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.032 [2024-12-14 16:48:55.031746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.032 [2024-12-14 16:48:55.041569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.032 [2024-12-14 16:48:55.041589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.033 [2024-12-14 16:48:55.041597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.033 [2024-12-14 16:48:55.050302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.033 [2024-12-14 16:48:55.050322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.033 [2024-12-14 16:48:55.050331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.033 [2024-12-14 16:48:55.059647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0) 00:35:25.033 [2024-12-14 16:48:55.059668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.033 [2024-12-14 16:48:55.059675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.033 [2024-12-14 16:48:55.069937] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.033 [2024-12-14 16:48:55.069957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.033 [2024-12-14 16:48:55.069965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.033 [2024-12-14 16:48:55.078067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.033 [2024-12-14 16:48:55.078088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.033 [2024-12-14 16:48:55.078096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.033 [2024-12-14 16:48:55.087361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.033 [2024-12-14 16:48:55.087382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.033 [2024-12-14 16:48:55.087390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.033 [2024-12-14 16:48:55.096499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.033 [2024-12-14 16:48:55.096519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.033 [2024-12-14 16:48:55.096528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.033 [2024-12-14 16:48:55.105600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.033 [2024-12-14 16:48:55.105619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.033 [2024-12-14 16:48:55.105627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.033 [2024-12-14 16:48:55.114906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.033 [2024-12-14 16:48:55.114930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.033 [2024-12-14 16:48:55.114938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.126041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.126061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.126069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.134551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.134578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.134586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.144333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.144353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.144361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.157052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.157072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.157080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.165404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.165423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.165431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.176067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.176087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.176095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.188504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.188524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.188532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.199243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.199263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.199270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.208523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.208542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.208549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 [2024-12-14 16:48:55.220009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.220028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20907 len:1 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.220036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290 25198.50 IOPS, 98.43 MiB/s [2024-12-14T15:48:55.376Z] [2024-12-14 16:48:55.229547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16e36e0)
00:35:25.290 [2024-12-14 16:48:55.229572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.290 [2024-12-14 16:48:55.229580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:25.290
00:35:25.290 Latency(us)
00:35:25.290 [2024-12-14T15:48:55.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:25.290 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:25.290 nvme0n1 : 2.01 25218.77 98.51 0.00 0.00 5069.28 2402.99 18724.57
00:35:25.290 [2024-12-14T15:48:55.376Z] ===================================================================================================================
00:35:25.290 [2024-12-14T15:48:55.376Z] Total : 25218.77 98.51 0.00 0.00 5069.28 2402.99 18724.57
00:35:25.290 {
00:35:25.290 "results": [
00:35:25.290 {
00:35:25.290 "job": "nvme0n1",
00:35:25.290 "core_mask": "0x2",
00:35:25.290 "workload": "randread",
00:35:25.290 "status": "finished",
00:35:25.290 "queue_depth": 128,
00:35:25.290 "io_size": 4096,
00:35:25.290 "runtime": 2.006997,
00:35:25.290 "iops": 25218.772125718177,
00:35:25.290 "mibps": 98.51082861608663,
00:35:25.290 "io_failed": 0,
00:35:25.290 "io_timeout": 0,
00:35:25.290 "avg_latency_us": 5069.280605441371,
00:35:25.290 "min_latency_us": 2402.9866666666667,
00:35:25.290 "max_latency_us": 18724.571428571428
00:35:25.290 }
00:35:25.290 ],
00:35:25.290 "core_count": 1
00:35:25.290 }
00:35:25.290 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:25.290 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:25.290 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:25.290 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:25.290 | .driver_specific
00:35:25.290 | .nvme_error
00:35:25.290 | .status_code
00:35:25.290 | .command_transient_transport_error'
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 ))
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197294
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197294 ']'
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197294
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197294
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197294'
killing process with pid 1197294
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197294
Received shutdown signal, test time was about 2.000000 seconds
00:35:25.548
00:35:25.548 Latency(us)
00:35:25.548 [2024-12-14T15:48:55.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:25.548 [2024-12-14T15:48:55.634Z] ===================================================================================================================
00:35:25.548 [2024-12-14T15:48:55.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:25.548 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197294
00:35:25.807 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197754
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197754 /var/tmp/bperf.sock
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197754 ']'
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-12-14 16:48:55.689776] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
[2024-12-14 16:48:55.689825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197754 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
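The `get_transient_errcount` step traced above shells out to `bdev_get_iostat` and walks the JSON path `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` with jq, then asserts the count is non-zero. As an illustrative sketch only (the JSON below is a hand-made stand-in shaped after that jq path, not captured RPC output), the same extraction in Python:

```python
import json

# Hand-made stand-in for `rpc.py bdev_get_iostat -b nvme0n1` output; only the
# fields walked by the jq filter in the log are included, with the 198 count
# the log's `(( 198 > 0 ))` check observed.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 198
          }
        }
      }
    }
  ]
}
""")

# Same walk as: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                      | .status_code | .command_transient_transport_error'
count = sample["bdevs"][0]["driver_specific"]["nvme_error"][
    "status_code"]["command_transient_transport_error"]

# Mirrors the test's pass condition at host/digest.sh@71
assert count > 0
print(count)
```

The digest test passes as long as this counter is positive, i.e. every injected crc32c corruption surfaced as a TRANSIENT TRANSPORT ERROR completion rather than corrupt data.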
00:35:25.807 [2024-12-14 16:48:55.761714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:25.807 [2024-12-14 16:48:55.784104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:25.807 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:25.808 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:25.808 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:25.808 16:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:26.066 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:26.066 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.066 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:26.066 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.066 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:26.066 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:26.633 nvme0n1
00:35:26.633 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:26.633 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.633 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:26.633 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.633 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:26.633 16:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:26.633 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:26.633 Zero copy mechanism will not be used.
00:35:26.633 Running I/O for 2 seconds... [2024-12-14 16:48:56.614901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.614936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.633 [2024-12-14 16:48:56.614947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.633 [2024-12-14 16:48:56.620011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.620035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.633 [2024-12-14 16:48:56.620044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.633 [2024-12-14 16:48:56.625235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.625257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.633 [2024-12-14 16:48:56.625265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.633 [2024-12-14 16:48:56.630504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.630525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.633 [2024-12-14 16:48:56.630534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.633 [2024-12-14 16:48:56.635738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.635763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.633 [2024-12-14 16:48:56.635772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.633 [2024-12-14 16:48:56.641066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.641088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.633 [2024-12-14 16:48:56.641095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.633 [2024-12-14 16:48:56.646509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.646532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.633 [2024-12-14 16:48:56.646541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.633 [2024-12-14 16:48:56.651713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.651734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.633 [2024-12-14 16:48:56.651743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.633 [2024-12-14 16:48:56.657020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.633 [2024-12-14 16:48:56.657042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.657050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.662293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.662315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.662322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.667494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.667515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.667524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.672762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.672784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.672792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.678034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.678056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.678065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.683253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.683275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.683284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.688513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.688534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.688542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.693825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.693847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.693856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.698977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.698998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.699005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.704143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.704165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.704173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.709274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.709295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.709304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.634 [2024-12-14 16:48:56.714337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.634 [2024-12-14 16:48:56.714359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.634 [2024-12-14 16:48:56.714367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.719479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.719500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.719509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.724650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.724671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.724683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.729753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.729772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.729780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.734831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.734853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.734861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.739877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.739897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.739905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.744904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.744925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.744933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.750014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.750035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.750043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.755154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.755176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.755183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.760182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.760203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.760210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.765237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.765257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.765264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.770302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.770326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.770334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.775350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.775372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.775379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.780445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.780467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.780475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.785489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.785510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.785518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.790581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.790602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.790610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.795641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.795662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.795670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.800713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.800733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.800741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.805753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.805774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.805782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.810817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.810838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.810846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.815918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.815939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.815947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.820984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.894 [2024-12-14 16:48:56.821005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.894 [2024-12-14 16:48:56.821013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:26.894 [2024-12-14 16:48:56.826069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.895 [2024-12-14 16:48:56.826090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.895 [2024-12-14 16:48:56.826099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:26.895 [2024-12-14 16:48:56.831164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.895 [2024-12-14 16:48:56.831184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.895 [2024-12-14 16:48:56.831192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:26.895 [2024-12-14 16:48:56.836265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.895 [2024-12-14 16:48:56.836284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.895 [2024-12-14 16:48:56.836292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:26.895 [2024-12-14 16:48:56.841375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130)
00:35:26.895 [2024-12-14 16:48:56.841395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.841406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.846501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.846520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.846528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.851519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.851540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.851549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.856520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.856541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.856553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.861644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.861665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.861673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.866774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.866794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.866803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.871920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.871943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.871951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.877008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.877030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.877038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.881994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.882015] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.882024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.887050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.887072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.887080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.892053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.892073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.892081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.897099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.897120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.897129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.902179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.902199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.902207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.907211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.907232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.907240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.912248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.912269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.912277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.917341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.917361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.917369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.922309] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.922330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.922337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.927392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.927413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.927421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.932468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.932488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.932496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.937542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.937568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.937576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.942585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.942605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.942616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.947649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.947670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.947678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.952691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.952712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.952720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.957732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.957763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.957771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:26.895 [2024-12-14 16:48:56.962885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.895 [2024-12-14 16:48:56.962906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.895 [2024-12-14 16:48:56.962915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:26.896 [2024-12-14 16:48:56.966319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.896 [2024-12-14 16:48:56.966339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.896 [2024-12-14 16:48:56.966347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:26.896 [2024-12-14 16:48:56.970383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.896 [2024-12-14 16:48:56.970404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.896 [2024-12-14 16:48:56.970412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:26.896 [2024-12-14 16:48:56.975430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:26.896 [2024-12-14 16:48:56.975452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.896 [2024-12-14 
16:48:56.975460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:56.980467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:56.980488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:56.980497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:56.985522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:56.985545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:56.985553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:56.990644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:56.990665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:56.990673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:56.995813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:56.995833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:56.995841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.000998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.001019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.001027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.005909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.005929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.005936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.010748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.010769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.010777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.015614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.015634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.015642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.020582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.020602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.020610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.025482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.025503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.025510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.030405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.030424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.030432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.035278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.035298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.035306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.040204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.040225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.040233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.045110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.045131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.045139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.050007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.050027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.050035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.054850] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.054871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.054879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.059634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.059655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.059663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.064401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.064420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.064428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.069382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.069403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.069414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.074509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.074530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.074537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.079616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.079637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.079645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.084664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.084685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.084695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.089673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.089693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.089701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.156 [2024-12-14 16:48:57.094702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.156 [2024-12-14 16:48:57.094722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.156 [2024-12-14 16:48:57.094731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.099689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.099710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.099718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.104719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.104740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.104749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.109788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.109809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 
16:48:57.109817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.114893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.114917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.114926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.120003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.120023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.120032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.125168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.125189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.125197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.130273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.130294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.130302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.135339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.135359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.135367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.140426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.140449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.140457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.145462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.145483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.145491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.150505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.150525] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.150533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.155563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.155583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.155591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.160626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.160646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.160654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.165723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.165744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.165751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.170846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 
16:48:57.170867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.170875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.175881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.175902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.175910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.180962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.180982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.180990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.186021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.186042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.186049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.191069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.191090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.191097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.196142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.196163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.196171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.201251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.201271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.201282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.206323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.206343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.206351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.211432] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.211452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.211460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.216512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.216533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.216541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.221718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.221739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.221747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.226854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.226874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.226881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.231972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.231992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.157 [2024-12-14 16:48:57.232002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.157 [2024-12-14 16:48:57.237083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.157 [2024-12-14 16:48:57.237104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.158 [2024-12-14 16:48:57.237112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.417 [2024-12-14 16:48:57.242292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.417 [2024-12-14 16:48:57.242314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.417 [2024-12-14 16:48:57.242322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.417 [2024-12-14 16:48:57.247445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.417 [2024-12-14 16:48:57.247466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.417 [2024-12-14 16:48:57.247474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.417 [2024-12-14 16:48:57.252499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.417 [2024-12-14 16:48:57.252520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.417 [2024-12-14 16:48:57.252528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.417 [2024-12-14 16:48:57.258426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.417 [2024-12-14 16:48:57.258448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.258456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.263681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.263702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.263710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.268792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.268813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 
16:48:57.268821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.273883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.273904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.273912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.279012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.279034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.279041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.284133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.284153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.284161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.289253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.289273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.289284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.294351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.294372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.294379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.299460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.299481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.299488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.304553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.304580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.304588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.309677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.309698] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.309705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.314758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.314782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.314790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.319882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.319903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.319910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.324990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.325011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.325019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.330086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 
16:48:57.330106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.330114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.335184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.335209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.335217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.340307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.340327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.340334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.345438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.345459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.345467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.350725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.350746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.350754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.355809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.355830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.355838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.360858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.360878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.360885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.365955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.365976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.365983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.371549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.371578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.371586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.377380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.377402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.377410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.382648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.382670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.382678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.387748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.387769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.387778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.392891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.392913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.392921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.418 [2024-12-14 16:48:57.397995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.418 [2024-12-14 16:48:57.398016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.418 [2024-12-14 16:48:57.398024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.403055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.403076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.403085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.408115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.408136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.408145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.413225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.413246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.413254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.418415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.418436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.418444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.423522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.423543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.423554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.428659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.428679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.428687] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.433717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.433737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.433746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.438837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.438857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.438864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.443920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.443941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.443948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.448963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.448983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.448991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.454093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.454114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.454122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.459184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.459205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.459213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.464229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.464249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.464257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.469389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.469409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.469417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.474537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.474564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.474572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.479686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.479707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.479716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.484803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.484825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.484833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.489979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.489999] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.490007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.495121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.495142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.495151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.419 [2024-12-14 16:48:57.500287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.419 [2024-12-14 16:48:57.500307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.419 [2024-12-14 16:48:57.500315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.505456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.505477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.679 [2024-12-14 16:48:57.505486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.510595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.510615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.679 [2024-12-14 16:48:57.510626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.515669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.515690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.679 [2024-12-14 16:48:57.515697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.520770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.520792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.679 [2024-12-14 16:48:57.520800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.525826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.525848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.679 [2024-12-14 16:48:57.525856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.530911] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.530932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.679 [2024-12-14 16:48:57.530940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.536052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.536073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.679 [2024-12-14 16:48:57.536081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.541130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.541151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.679 [2024-12-14 16:48:57.541159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.679 [2024-12-14 16:48:57.546175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.679 [2024-12-14 16:48:57.546195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.546203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.551276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.551297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.551305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.556346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.556373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.556382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.561510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.561531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.561540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.566667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.566687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.566695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.571809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.571828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.571836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.576889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.576911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.576919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.582000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.582021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.582029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.587080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.587101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.587109] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.592186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.592207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.592215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.597326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.597348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.597355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.602452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.602474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.602482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.680 6037.00 IOPS, 754.62 MiB/s [2024-12-14T15:48:57.766Z] [2024-12-14 16:48:57.608899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.608921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.608929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.614788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.614810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.614819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.620253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.620274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.620282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.625414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.625435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.625443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.630526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.630547] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.630563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.635724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.635745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.635753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.640881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.640902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.640910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.646038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.646059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.646071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.651179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.651201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.651209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.656387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.656409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.656418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.661535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.661564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.661573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.666712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.666733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.666742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.671883] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.671904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.671912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.676892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.676913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.676922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.681971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.681992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.682000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.687095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.680 [2024-12-14 16:48:57.687116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.680 [2024-12-14 16:48:57.687125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:35:27.680 [2024-12-14 16:48:57.692204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.692226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.692234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.697287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.697309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.697317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.702400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.702422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.702430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.707569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.707590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.707598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.712726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.712748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.712756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.717865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.717886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.717894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.723044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.723065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.723073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.728283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.728304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.728312] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.733469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.733491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.733502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.738819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.738840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.738849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.744052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.744074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.744082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.749246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.749266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.749274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.754406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.754426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.754434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.681 [2024-12-14 16:48:57.760607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.681 [2024-12-14 16:48:57.760629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.681 [2024-12-14 16:48:57.760638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.941 [2024-12-14 16:48:57.768022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.941 [2024-12-14 16:48:57.768045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.941 [2024-12-14 16:48:57.768053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.941 [2024-12-14 16:48:57.774478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.941 [2024-12-14 16:48:57.774500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.941 [2024-12-14 16:48:57.774508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.941 [2024-12-14 16:48:57.781001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.941 [2024-12-14 16:48:57.781023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.781031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.787141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.787166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.787175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.793345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.793368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.793376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.798770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.798791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.798799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.804089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.804112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.804120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.810327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.810349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.810357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.817453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.817475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.817483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.825493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.825515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.825524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.831578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.831601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.831609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.838740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.838762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.838770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.846226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.846247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.846255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.854494] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.854515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.854523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.862872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.862893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.862902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.870973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.870995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.871004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.879067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.879090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.879098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.884959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.884980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.884989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.890255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.890276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.890283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.895505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.895525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.895534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.900996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.901017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.901029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.905853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.905875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.905883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.911133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.911154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.911162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.916478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.916500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.916509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.921755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.921776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 
16:48:57.921784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.927188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.927210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.927218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.932538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.932566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.942 [2024-12-14 16:48:57.932574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.942 [2024-12-14 16:48:57.937832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.942 [2024-12-14 16:48:57.937853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.937861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.943138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.943160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.943168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.948517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.948551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.948565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.953796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.953816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.953824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.958969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.958988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.958996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.964219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.964240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.964248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.967701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.967721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.967728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.972100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.972121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.972128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.977343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.977364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.977372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.982652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 
00:35:27.943 [2024-12-14 16:48:57.982673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.982680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.987912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.987933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.987940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.993101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.993122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.993129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:57.998434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:57.998454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:57.998462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:58.003729] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:58.003749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:58.003757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:58.008811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:58.008831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:58.008839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:58.013980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:58.014001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:58.014008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:58.019161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:58.019182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:58.019190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:35:27.943 [2024-12-14 16:48:58.024410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:27.943 [2024-12-14 16:48:58.024431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.943 [2024-12-14 16:48:58.024439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.029727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.029747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.029755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.035006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.035027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.035038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.040230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.040251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.040258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.045348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.045368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.045376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.050537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.050562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.050570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.055759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.055778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.055786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.060999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.061022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.061030] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.065866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.065887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.065894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.070918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.070939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.070947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.075890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.075910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.075918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.080831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.080852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:28.203 [2024-12-14 16:48:58.080859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.203 [2024-12-14 16:48:58.086195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.203 [2024-12-14 16:48:58.086217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.086225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.091895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.091917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.091924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.097854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.097875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.097882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.103275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.103295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.103303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.108811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.108831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.108839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.114217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.114237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.114244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.119541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.119567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.119575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.124888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 
16:48:58.124909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.124920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.130232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.130252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.130259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.135606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.135628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.135636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.141012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.141032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.141040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.146175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.146195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.146202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.151463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.151483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.151491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.156520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.156539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.156547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.161818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.161839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.161846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.167186] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.167205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.167213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.172820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.172846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.172853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.178306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.178326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.178334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.183629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.183650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.183658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.188951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.188972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.188980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.194310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.194331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.194338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.199676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.199697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.199706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.204878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.204898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.204906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.210199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.210220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.210228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.215505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.215525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.215534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.220745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.220765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.220772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.226042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.226062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.226070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.231343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.204 [2024-12-14 16:48:58.231363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.204 [2024-12-14 16:48:58.231371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.204 [2024-12-14 16:48:58.236619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.236639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.236647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.241857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.241877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.241885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.247254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.247274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.247282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.252635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.252656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.252664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.257940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.257960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.257967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.263230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.263250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.263261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.268485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.268505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.268513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.273900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.273920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.273928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.279175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.279195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.279203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.205 [2024-12-14 16:48:58.284686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.205 [2024-12-14 16:48:58.284707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.205 [2024-12-14 16:48:58.284715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.290522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 
16:48:58.290543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.290551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.295980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.296000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.296008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.301306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.301326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.301334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.306636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.306657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.306665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.311952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.311973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.311981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.317319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.317340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.317348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.322688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.322708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.322716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.327996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.328016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.328024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.333149] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.333169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.333177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.338397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.338417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.338425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.343677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.343697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.343705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.349206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.349226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.349234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.354392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.354412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.354423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.359643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.359663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.359670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.364857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.364877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.364885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.370194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.370214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.370222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.375581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.375601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.375609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.380867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.380886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.380893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.386334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.386353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.386361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.391597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.391617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 
16:48:58.391625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.396859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.396879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.396886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.402485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.402509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.402517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.408177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.408197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.408205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.413469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.413489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.413498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.465 [2024-12-14 16:48:58.418728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.465 [2024-12-14 16:48:58.418749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.465 [2024-12-14 16:48:58.418757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.424062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.424083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.424091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.429376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.429396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.429404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.434703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.434724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.434732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.439763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.439783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.439791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.444894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.444914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.444922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.450175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.450196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.450203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.455326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.455346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.455353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.460661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.460681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.460689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.466119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.466139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.466147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.471386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.471405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.471413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.476709] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.476730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.476738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.481844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.481864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.481873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.486963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.486984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.486992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.492128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.492149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.492160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.497334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.497355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.497363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.502608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.502628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.502636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.507848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.507868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.507876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.513140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.513162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.513170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.518506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.518527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.518535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.524105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.524127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.524135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.529441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.529462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.529470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.534652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.534673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 
16:48:58.534681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.539883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.539907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.539914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.466 [2024-12-14 16:48:58.545210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.466 [2024-12-14 16:48:58.545230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.466 [2024-12-14 16:48:58.545238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.550452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.550473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.550481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.555718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.555739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.555746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.561096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.561116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.561124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.566225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.566245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.566252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.571549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.571575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.571583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.576952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.576972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.576980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.582338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.582359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.582366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.587900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.587921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.587929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.593214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.593235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.593242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.598514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 
00:35:28.726 [2024-12-14 16:48:58.598535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.598542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.726 [2024-12-14 16:48:58.603754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.603774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.603782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.726 5870.50 IOPS, 733.81 MiB/s [2024-12-14T15:48:58.812Z] [2024-12-14 16:48:58.610201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d06130) 00:35:28.726 [2024-12-14 16:48:58.610219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.726 [2024-12-14 16:48:58.610226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.726 00:35:28.726 Latency(us) 00:35:28.726 [2024-12-14T15:48:58.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.726 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:28.726 nvme0n1 : 2.00 5872.92 734.12 0.00 0.00 2721.14 612.45 13793.77 00:35:28.726 [2024-12-14T15:48:58.812Z] =================================================================================================================== 00:35:28.726 [2024-12-14T15:48:58.812Z] Total : 5872.92 734.12 0.00 0.00 2721.14 
612.45 13793.77 00:35:28.726 { 00:35:28.726 "results": [ 00:35:28.726 { 00:35:28.726 "job": "nvme0n1", 00:35:28.726 "core_mask": "0x2", 00:35:28.726 "workload": "randread", 00:35:28.726 "status": "finished", 00:35:28.726 "queue_depth": 16, 00:35:28.726 "io_size": 131072, 00:35:28.726 "runtime": 2.0019, 00:35:28.726 "iops": 5872.920725310954, 00:35:28.726 "mibps": 734.1150906638693, 00:35:28.726 "io_failed": 0, 00:35:28.726 "io_timeout": 0, 00:35:28.726 "avg_latency_us": 2721.140240343139, 00:35:28.726 "min_latency_us": 612.4495238095238, 00:35:28.726 "max_latency_us": 13793.76761904762 00:35:28.726 } 00:35:28.726 ], 00:35:28.726 "core_count": 1 00:35:28.726 } 00:35:28.726 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:28.726 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:28.726 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:28.726 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:28.726 | .driver_specific 00:35:28.726 | .nvme_error 00:35:28.726 | .status_code 00:35:28.726 | .command_transient_transport_error' 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 380 > 0 )) 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197754 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197754 ']' 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197754 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 
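The run summary above is also emitted as a JSON `results` object, and the test's pass criterion is the `jq` filter shown above, which digs the `command_transient_transport_error` counter out of `bdev_get_iostat`. As a sketch of both steps in Python (the `results` literal is copied from the log; the nested `iostat` dict is a hypothetical stand-in for the real RPC reply, shaped to match the jq path):

```python
import json

# bdevperf results JSON, copied from the log output above.
results = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randread",
      "status": "finished",
      "queue_depth": 16,
      "io_size": 131072,
      "runtime": 2.0019,
      "iops": 5872.920725310954,
      "mibps": 734.1150906638693,
      "io_failed": 0,
      "io_timeout": 0
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# MiB/s is just IOPS x IO size: 5872.92 * 128 KiB ~= 734.12 MiB/s.
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
assert abs(mibps - job["mibps"]) < 1e-6

# Same extraction the test does with jq:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#             | .command_transient_transport_error
iostat = {"bdevs": [{"driver_specific": {"nvme_error": {"status_code": {
    "command_transient_transport_error": 380}}}}]}
errcount = (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])
assert errcount > 0  # host/digest.sh@71: (( 380 > 0 ))
```

The `(( 380 > 0 ))` check above passes because every injected digest failure is completed back to the host as a transient transport error and counted per-controller in the iostat reply.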
00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197754 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197754' 00:35:28.985 killing process with pid 1197754 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197754 00:35:28.985 Received shutdown signal, test time was about 2.000000 seconds 00:35:28.985 00:35:28.985 Latency(us) 00:35:28.985 [2024-12-14T15:48:59.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.985 [2024-12-14T15:48:59.071Z] =================================================================================================================== 00:35:28.985 [2024-12-14T15:48:59.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:28.985 16:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197754 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
qd=128 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1198255 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1198255 /var/tmp/bperf.sock 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1198255 ']' 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:28.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:28.986 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:29.244 [2024-12-14 16:48:59.084194] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:29.244 [2024-12-14 16:48:59.084247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198255 ] 00:35:29.244 [2024-12-14 16:48:59.159867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.244 [2024-12-14 16:48:59.181067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.244 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.244 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:29.244 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:29.244 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:29.503 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:29.503 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.503 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:29.503 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.503 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.503 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:30.070 nvme0n1 00:35:30.070 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:30.070 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.070 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:30.070 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.070 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:30.070 16:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:30.070 Running I/O for 2 seconds... 
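The digest errors that follow are produced by the `accel_error_inject_error -o crc32c -t corrupt` call above: the controller was attached with `--ddgst`, so each TCP data PDU carries a CRC32C data digest, the receiver recomputes it over the payload, and any mismatch surfaces as the "data digest error" / transient transport error seen below. A minimal bit-at-a-time CRC-32C (Castagnoli) sketch of that check, not SPDK's actual (accelerated) implementation:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C, reflected Castagnoli polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

payload = b"123456789"
digest = crc32c(payload)
assert digest == 0xE3069283  # standard CRC-32C check value

# Flipping a single bit changes the digest, which is exactly what the
# corrupt injection simulates: received digest != recomputed digest.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(corrupted) != digest
```

In the log, each such mismatch is reported by `data_crc32_calc_done` and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) rather than being retried silently, which is what the error counter assertion relies on.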
00:35:30.070 [2024-12-14 16:49:00.064875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.065014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.065043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.074421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.074571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.074591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.084012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.084135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.084158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.093402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.093535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.093553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.103093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.103254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.103272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.113996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.114131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.114150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.123656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.123782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.123801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.133125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.133249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.133268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.142492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.142619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.142638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.070 [2024-12-14 16:49:00.151854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.070 [2024-12-14 16:49:00.151976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.070 [2024-12-14 16:49:00.151995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.161365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.161484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.161503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.170711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.170836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:30.330 [2024-12-14 16:49:00.170853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.179992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.180114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.180132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.189393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.189516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.189535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.198711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.198836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.198854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.208144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.208265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:16591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.208283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.217434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.217554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.217576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.226786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.226908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.226926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.236085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.236206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.236225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.245513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.245653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.245672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.254838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.254961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.254979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.264278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.264411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.264429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.273553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.273682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.273700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.282888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 
00:35:30.330 [2024-12-14 16:49:00.283009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.283027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.292260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.292383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.292402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.301562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.301684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.301702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.330 [2024-12-14 16:49:00.310884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.330 [2024-12-14 16:49:00.311004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.330 [2024-12-14 16:49:00.311022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.320211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.320331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.320348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.329684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.329804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.329829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.338974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.339098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.339117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.348261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.348384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.348402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.357905] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.358029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.358047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.367208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.367331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.367349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.376544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.376678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.376697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.385867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.385988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.386005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.395272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.395393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.395411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.404602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.331 [2024-12-14 16:49:00.404723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.331 [2024-12-14 16:49:00.404741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.331 [2024-12-14 16:49:00.414599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.414735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.414755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.424241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.424367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.424386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.433539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.433669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.433689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.442833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.442956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.442973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.452234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.452356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.452374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.461501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.461633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.461651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.470767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.470889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.470907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.480038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.480158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.480176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.489405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.489529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.489547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.498708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.498832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:30.590 [2024-12-14 16:49:00.498851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.508015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.508138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.508156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.517332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.517454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.517473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.526628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.526747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.526765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.535942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.536062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:4273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.536080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.545234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.545357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.545375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.554541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.554674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.554692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.563883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.564005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.564023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.573179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.573299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.573320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.582660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.582781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.590 [2024-12-14 16:49:00.582800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.590 [2024-12-14 16:49:00.591955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.590 [2024-12-14 16:49:00.592078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.592096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.591 [2024-12-14 16:49:00.601253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.591 [2024-12-14 16:49:00.601376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.601394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.591 [2024-12-14 16:49:00.610582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 
00:35:30.591 [2024-12-14 16:49:00.610704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.610722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.591 [2024-12-14 16:49:00.619891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.591 [2024-12-14 16:49:00.620014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.620032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.591 [2024-12-14 16:49:00.629191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.591 [2024-12-14 16:49:00.629315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.629332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.591 [2024-12-14 16:49:00.638600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.591 [2024-12-14 16:49:00.638735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.638753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.591 [2024-12-14 16:49:00.647915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.591 [2024-12-14 16:49:00.648035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.648053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.591 [2024-12-14 16:49:00.657210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.591 [2024-12-14 16:49:00.657339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.657357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.591 [2024-12-14 16:49:00.666515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.591 [2024-12-14 16:49:00.666644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.591 [2024-12-14 16:49:00.666662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.675967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.676092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.676110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.685407] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.685529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.685547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.694683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.694806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.694823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.703998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.704120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.704139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.713295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.713414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.713432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.722598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.722716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.722734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.731892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.732013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.732031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.741187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.741307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.741325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.750489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.750620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.750638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.759786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.759908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.759926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.769083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.769205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.769224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.778376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.778497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.778515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.787683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.787805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.787823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.796974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.797096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.797114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.806261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.806384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.806402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.815554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.850 [2024-12-14 16:49:00.815681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.850 [2024-12-14 16:49:00.815702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.850 [2024-12-14 16:49:00.824844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.824964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:30.851 [2024-12-14 16:49:00.824981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.834278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.834400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.834419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.843587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.843710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.843728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.852867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.852987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.853005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.862202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.862324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:15662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.862342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.871488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.871620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.871638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.880792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.880918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.880935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.890082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.890202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.890220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.899364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.899490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.899509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.908665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.908790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.908808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.917965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.918089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.918107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.851 [2024-12-14 16:49:00.927244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:30.851 [2024-12-14 16:49:00.927366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.851 [2024-12-14 16:49:00.927384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.113 [2024-12-14 16:49:00.936772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 
00:35:31.113 [2024-12-14 16:49:00.936894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.113 [2024-12-14 16:49:00.936912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.113 [2024-12-14 16:49:00.946161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.113 [2024-12-14 16:49:00.946283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.113 [2024-12-14 16:49:00.946301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.113 [2024-12-14 16:49:00.955442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.113 [2024-12-14 16:49:00.955568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.113 [2024-12-14 16:49:00.955586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.113 [2024-12-14 16:49:00.964735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.113 [2024-12-14 16:49:00.964857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.113 [2024-12-14 16:49:00.964875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.113 [2024-12-14 16:49:00.974020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.113 [2024-12-14 16:49:00.974141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.113 [2024-12-14 16:49:00.974160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.113 [2024-12-14 16:49:00.983296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.113 [2024-12-14 16:49:00.983418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.113 [2024-12-14 16:49:00.983437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.113 [2024-12-14 16:49:00.992588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.113 [2024-12-14 16:49:00.992711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.113 [2024-12-14 16:49:00.992730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.113 [2024-12-14 16:49:01.001882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.113 [2024-12-14 16:49:01.002003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.113 [2024-12-14 16:49:01.002021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.011177] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.011297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.011314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.020450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.020570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.020588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.029738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.029860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.029879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.039032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.039151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.039170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.048313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.048434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.048452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 27192.00 IOPS, 106.22 MiB/s [2024-12-14T15:49:01.200Z] [2024-12-14 16:49:01.057576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.057698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.057720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.066882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.067002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.067020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.076176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.076298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.076316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.085544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.085671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.085690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.094821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.094941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.094959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.104105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.104227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.104245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.113380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.113499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:31.114 [2024-12-14 16:49:01.113517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.122720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.122858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.122876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.132108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.132230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.132248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.141402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.141530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.141548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.150675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.150797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:10139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.150815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.160021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.160142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.160161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.169310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.169429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.169447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.178589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.178710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.178728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.187875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.114 [2024-12-14 16:49:01.187994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.114 [2024-12-14 16:49:01.188012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.114 [2024-12-14 16:49:01.197351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.197475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.197494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.206869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.206990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.207008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.216217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.216339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.216357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.225495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 
00:35:31.375 [2024-12-14 16:49:01.225627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.225653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.234800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.234921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.234938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.244105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.244228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.244247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.253409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.253531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.253550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.262681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.262805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.262823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.271975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.272097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.272115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.281259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.281382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.281400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.290604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.290725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.290744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.299896] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.375 [2024-12-14 16:49:01.300016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.375 [2024-12-14 16:49:01.300037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.375 [2024-12-14 16:49:01.309188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.309310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.309328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.318494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.318621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.318640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.327788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.327910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.327929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.337242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.337365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.337384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.346523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.346651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.346668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.355815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.355936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.355954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.365094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.365215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.365233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.374372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.374493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.374512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.383638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.383759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.383778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.392941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.393064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.393081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.402216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.402340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.402357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.411523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.411655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.411673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.420954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.421080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.421099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.430260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.430382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.430400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.439519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.439651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:31.376 [2024-12-14 16:49:01.439669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.448829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.448949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.448967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.376 [2024-12-14 16:49:01.458164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.376 [2024-12-14 16:49:01.458289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.376 [2024-12-14 16:49:01.458310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.467682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.467805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.467823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.476980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.477103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:23486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.477121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.486244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.486365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.486383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.495528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.495655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.495674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.504814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.504935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.504953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.514110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.514233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.514251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.523374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.523496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.523513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.532645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.532765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.532783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.541902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.542029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.542047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.551142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 
00:35:31.635 [2024-12-14 16:49:01.551262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.551279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.560397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.560519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.560536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.569630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.569751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.569769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.578864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.578984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.579002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.588218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.588340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.588358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.597464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.597592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.597610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.606694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.606814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.606833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.615931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.616051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.616068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.625170] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.625290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.625308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.634388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.634508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.634526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.643657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.643779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.643797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.652909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.653032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.653050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.662164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.662296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.662314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.671398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.671518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.671536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.680630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.635 [2024-12-14 16:49:01.680751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.635 [2024-12-14 16:49:01.680769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.635 [2024-12-14 16:49:01.689879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.636 [2024-12-14 16:49:01.689997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.636 [2024-12-14 16:49:01.690016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.636 [2024-12-14 16:49:01.699151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.636 [2024-12-14 16:49:01.699271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.636 [2024-12-14 16:49:01.699294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.636 [2024-12-14 16:49:01.708499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.636 [2024-12-14 16:49:01.708631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.636 [2024-12-14 16:49:01.708649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.636 [2024-12-14 16:49:01.717910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.636 [2024-12-14 16:49:01.718034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.636 [2024-12-14 16:49:01.718052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.894 [2024-12-14 16:49:01.727359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.894 [2024-12-14 16:49:01.727479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.894 [2024-12-14 16:49:01.727496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.894 [2024-12-14 16:49:01.736615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.894 [2024-12-14 16:49:01.736738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.894 [2024-12-14 16:49:01.736756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.894 [2024-12-14 16:49:01.745860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.745979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.745997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.755108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.755229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.755247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.764366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.764486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:31.895 [2024-12-14 16:49:01.764503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.773628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.773749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.773767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.782858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.782985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.783003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.792269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.792390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.792410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.801492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.801622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:6021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.801639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.810750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.810871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.810888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.819983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.820105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.820123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.829251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.829374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.829392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.838862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.838986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.839005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.848227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.848348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.848365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.857499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.857626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.857646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.866749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.866871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.866889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.876003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 
00:35:31.895 [2024-12-14 16:49:01.876123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.876140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.885176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.885296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.885313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.894523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.894652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.894671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.904028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.904165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.904183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.913552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.913684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.913702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.923022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.923144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.923161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.932292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.932411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.932429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.941529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.941657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.941679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.950784] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.950905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.950924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.960044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.960165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.960183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.969286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.895 [2024-12-14 16:49:01.969405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.895 [2024-12-14 16:49:01.969423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:31.895 [2024-12-14 16:49:01.978667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:31.896 [2024-12-14 16:49:01.978791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.896 [2024-12-14 16:49:01.978810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 00:35:32.154 [2024-12-14 16:49:01.988100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:32.154 [2024-12-14 16:49:01.988221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.154 [2024-12-14 16:49:01.988239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:32.154 [2024-12-14 16:49:01.997334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:32.154 [2024-12-14 16:49:01.997457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.154 [2024-12-14 16:49:01.997475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:32.154 [2024-12-14 16:49:02.006595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:32.154 [2024-12-14 16:49:02.006718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.154 [2024-12-14 16:49:02.006736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:32.154 [2024-12-14 16:49:02.015837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:32.154 [2024-12-14 16:49:02.015958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.154 [2024-12-14 16:49:02.015976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:32.154 [2024-12-14 16:49:02.025093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:32.154 [2024-12-14 16:49:02.025219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.154 [2024-12-14 16:49:02.025237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:32.154 [2024-12-14 16:49:02.034353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:32.154 [2024-12-14 16:49:02.034476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.154 [2024-12-14 16:49:02.034494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:32.154 [2024-12-14 16:49:02.043592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:32.154 [2024-12-14 16:49:02.043716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.154 [2024-12-14 16:49:02.043734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:32.154 [2024-12-14 16:49:02.052847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4cdc0) with pdu=0x200016efdeb0 00:35:32.154 [2024-12-14 16:49:02.052968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.155 [2024-12-14 16:49:02.052986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:35:32.155 27338.00 IOPS, 106.79 MiB/s
00:35:32.155 Latency(us)
00:35:32.155 [2024-12-14T15:49:02.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:32.155 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:32.155 nvme0n1 : 2.00 27338.17 106.79 0.00 0.00 4674.39 3542.06 11047.50
00:35:32.155 [2024-12-14T15:49:02.241Z] ===================================================================================================================
00:35:32.155 [2024-12-14T15:49:02.241Z] Total : 27338.17 106.79 0.00 0.00 4674.39 3542.06 11047.50
00:35:32.155 {
00:35:32.155   "results": [
00:35:32.155     {
00:35:32.155       "job": "nvme0n1",
00:35:32.155       "core_mask": "0x2",
00:35:32.155       "workload": "randwrite",
00:35:32.155       "status": "finished",
00:35:32.155       "queue_depth": 128,
00:35:32.155       "io_size": 4096,
00:35:32.155       "runtime": 2.004377,
00:35:32.155       "iops": 27338.170414048855,
00:35:32.155       "mibps": 106.78972817987834,
00:35:32.155       "io_failed": 0,
00:35:32.155       "io_timeout": 0,
00:35:32.155       "avg_latency_us": 4674.3888759346355,
00:35:32.155       "min_latency_us": 3542.064761904762,
00:35:32.155       "max_latency_us": 11047.497142857143
00:35:32.155     }
00:35:32.155   ],
00:35:32.155   "core_count": 1
00:35:32.155 }
00:35:32.155 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:32.155 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:32.155 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:32.155 | .driver_specific
00:35:32.155 | .nvme_error
00:35:32.155 | .status_code
00:35:32.155 | .command_transient_transport_error'
00:35:32.155 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1198255
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1198255 ']'
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1198255
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198255
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198255'
00:35:32.413 killing process with pid 1198255
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1198255
00:35:32.413 Received shutdown signal, test time was about 2.000000 seconds
00:35:32.413
00:35:32.413 Latency(us)
00:35:32.413 [2024-12-14T15:49:02.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:32.413 [2024-12-14T15:49:02.499Z] ===================================================================================================================
00:35:32.413 [2024-12-14T15:49:02.499Z] Total :
0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1198255
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1198886
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1198886 /var/tmp/bperf.sock
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1198886 ']'
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:32.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
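The `get_transient_errcount` helper traced above pipes `bdev_get_iostat` output through a jq filter (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`) and then asserts the count is positive. A minimal Python equivalent of that extraction is sketched below; the `sample` document is a hypothetical fragment shaped after the jq path, not verbatim SPDK output.

```python
import json


def transient_errcount(stats: dict) -> int:
    """Mirror the jq filter used by get_transient_errcount:
    .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
    """
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]


# Hypothetical bdev_get_iostat fragment; 214 matches the (( 214 > 0 )) check in the log.
sample = json.loads("""
{"bdevs": [{"name": "nvme0n1",
            "driver_specific": {"nvme_error": {"status_code":
                {"command_transient_transport_error": 214}}}}]}
""")
assert transient_errcount(sample) > 0
```

The test passes only when the injected digest corruption actually produced transient transport errors, which is why the trace checks `(( 214 > 0 ))` rather than an exact count.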
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:32.413 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:32.672 [2024-12-14 16:49:02.533865] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:35:32.672 [2024-12-14 16:49:02.533912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198886 ]
00:35:32.672 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:32.672 Zero copy mechanism will not be used.
00:35:32.672 [2024-12-14 16:49:02.609819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:32.672 [2024-12-14 16:49:02.631948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:32.672 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:32.672 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:32.672 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:32.672 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:32.930 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:32.930 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:32.930 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@10 -- # set +x 00:35:32.930 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.930 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:32.930 16:49:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:33.498 nvme0n1 00:35:33.498 16:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:33.498 16:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.498 16:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:33.498 16:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.498 16:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:33.498 16:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:33.498 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:33.498 Zero copy mechanism will not be used. 00:35:33.498 Running I/O for 2 seconds... 
00:35:33.499 [2024-12-14 16:49:03.431944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.432040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.432067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.437755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.437827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.437848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.443426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.443512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.443531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.448823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.448879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.448898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.453991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.454051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.454070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.458672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.458747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.458766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.463077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.463149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.463168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.467365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.467420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.467438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.471514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.471591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.471609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.475730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.475793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.475811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.479854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.479919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.479937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.483950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.484005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:33.499 [2024-12-14 16:49:03.484024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.488055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.488121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.488142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.492156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.492216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.492234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.496457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.496504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.496522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.500797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.500900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.500921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.504908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.504956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.504975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.508985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.509046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.509065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.513088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.513142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.513160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.517211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.517262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.517281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.521331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.521392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.521410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.525449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.525511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.525529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.529553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.529623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.529641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.533675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 
00:35:33.499 [2024-12-14 16:49:03.533728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.533746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.537793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.537853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.537871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.541923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.541992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.546036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.499 [2024-12-14 16:49:03.546097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.499 [2024-12-14 16:49:03.546115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.499 [2024-12-14 16:49:03.550105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.500 [2024-12-14 16:49:03.550169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.500 [2024-12-14 16:49:03.550186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.500 [2024-12-14 16:49:03.554159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.500 [2024-12-14 16:49:03.554228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.500 [2024-12-14 16:49:03.554247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.500 [2024-12-14 16:49:03.558261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.500 [2024-12-14 16:49:03.558317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.500 [2024-12-14 16:49:03.558335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.500 [2024-12-14 16:49:03.562394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.500 [2024-12-14 16:49:03.562449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.500 [2024-12-14 16:49:03.562467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.500 [2024-12-14 16:49:03.566568] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.500 [2024-12-14 16:49:03.566647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.500 [2024-12-14 16:49:03.566665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.500 [2024-12-14 16:49:03.570673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.500 [2024-12-14 16:49:03.570737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.500 [2024-12-14 16:49:03.570756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.500 [2024-12-14 16:49:03.574756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.500 [2024-12-14 16:49:03.574809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.500 [2024-12-14 16:49:03.574827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.500 [2024-12-14 16:49:03.578919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.500 [2024-12-14 16:49:03.578972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.500 [2024-12-14 16:49:03.578991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:33.500 [2024-12-14 16:49:03.583083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.583138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.583157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.587251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.587311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.587329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.591373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.591423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.591441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.595466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.595520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.595542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.599618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.599675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.599694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.603701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.603762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.603781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.607834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.607893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.607911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.611997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.612061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.612079] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.616135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.616191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.616209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.760 [2024-12-14 16:49:03.620720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.760 [2024-12-14 16:49:03.620788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.760 [2024-12-14 16:49:03.620806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.624983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.625044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.625063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.629123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.629184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 
16:49:03.629203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.633420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.633476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.633494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.637521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.637575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.637594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.641755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.641817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.641836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.646132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.646197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.646215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.650539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.650622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.650640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.655239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.655301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.655319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.660102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.660163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.660180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.665585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.665637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.665655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.670268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.670320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.670339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.674726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.674789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.674807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.679079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.679169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.679187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.683783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.683854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.683872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.688622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.688691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.688709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.692891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.692956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.692975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.697083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.697187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.697206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.701488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 
00:35:33.761 [2024-12-14 16:49:03.701546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.701570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.705692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.705758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.705776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.709922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.709975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.709996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.714380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.714449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.714467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.718516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.718585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.718603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.722817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.722870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.722887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.727655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.727719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.727736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.731905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.731957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.731975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.736471] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.736545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.736569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.741393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.741457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.741476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.761 [2024-12-14 16:49:03.745994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.761 [2024-12-14 16:49:03.746060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.761 [2024-12-14 16:49:03.746078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.750978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.751031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.751049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:33.762 [2024-12-14 16:49:03.756209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.756258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.756276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.761805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.761863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.761881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.767205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.767277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.767294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.772414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.772470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.772489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.777137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.777202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.777220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.781882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.781971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.781989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.786742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.786799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.786817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.792145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.792201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.792218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.798078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.798175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.798193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.803180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.803236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.803254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.808304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.808354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.808372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.813441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.813534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:33.762 [2024-12-14 16:49:03.813552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.818644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.818781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.818799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.823812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.823954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.823973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.830169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.830227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.830245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.835002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.835057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.835075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.762 [2024-12-14 16:49:03.839829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:33.762 [2024-12-14 16:49:03.839913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.762 [2024-12-14 16:49:03.839935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.844428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.844520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.844538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.849627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.849721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.849740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.856205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.856384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.856402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.862547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.862633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.862651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.869640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.869773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.869791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.876944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.877078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.877096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.884347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.884479] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.884497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.892330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.892472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.892490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.899123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.899272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.899290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.906114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.906274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.906293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.913147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with 
pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.913286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.913304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.919992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.920141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.920159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.926499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.926665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.926683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.933435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.933602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.933621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.940603] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.940739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.940757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.947651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.947793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.022 [2024-12-14 16:49:03.947812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.022 [2024-12-14 16:49:03.954602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.022 [2024-12-14 16:49:03.954761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:03.954779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:03.961157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:03.961333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:03.961352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 
16:49:03.967788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:03.967972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:03.967991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:03.974358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:03.974409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:03.974427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:03.980348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:03.980459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:03.980477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:03.986093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:03.986173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:03.986191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:03.991345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:03.991435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:03.991454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:03.996598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:03.996771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:03.996792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.002687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.002869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.002887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.008954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.009116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.009137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.015409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.015590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.015608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.022656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.022803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.022821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.029610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.029711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.029730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.036878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.037030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.037050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.044325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.044415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.044434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.050875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.050992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.051010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.058208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.058304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.058322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.064866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.064924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.023 [2024-12-14 16:49:04.064942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.071582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.071659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.071678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.078642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.078703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.078720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.085981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.086112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.086131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.093776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.093913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.093931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.023 [2024-12-14 16:49:04.101323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.023 [2024-12-14 16:49:04.101459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.023 [2024-12-14 16:49:04.101478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.109375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.109466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.109485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.116908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.117048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.117066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.124041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.124099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.124117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.130664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.130745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.130764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.137154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.137237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.137255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.142741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.142793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.142811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.147516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 
00:35:34.283 [2024-12-14 16:49:04.147578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.147596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.152579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.152661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.152679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.157467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.157532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.157550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.162110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.162165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.162183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.283 [2024-12-14 16:49:04.167517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.283 [2024-12-14 16:49:04.167618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.283 [2024-12-14 16:49:04.167637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.172322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.172378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.172395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.176789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.176880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.176902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.181477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.181613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.181631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.186657] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.186712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.186730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.191542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.191615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.191634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.196315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.196382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.196400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.200985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.201041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.201059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:34.284 [2024-12-14 16:49:04.206076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.206126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.206144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.211097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.211181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.211200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.215922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.215974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.215991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.220763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.220857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.220875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.225736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.225794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.225812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.230655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.230718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.230735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.235706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.235767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.235785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.240443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.240520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.240539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.245040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.245094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.245113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.249786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.249919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.249936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.254756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.254897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.254914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.259514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.259598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.284 [2024-12-14 16:49:04.259616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.264223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.264276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.264294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.269084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.269148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.269166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.273997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.274080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.274099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.279090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.279157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.279175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.284820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.284896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.284913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.291183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.291262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.291280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.297955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.298137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.298155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.305231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.305290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.284 [2024-12-14 16:49:04.305308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.284 [2024-12-14 16:49:04.311807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.284 [2024-12-14 16:49:04.311864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.311885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.318691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.285 [2024-12-14 16:49:04.318794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.318811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.325990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.285 [2024-12-14 16:49:04.326304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.326325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.332499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 
00:35:34.285 [2024-12-14 16:49:04.332799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.332819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.338803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.285 [2024-12-14 16:49:04.339063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.339083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.344153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.285 [2024-12-14 16:49:04.344415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.344435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.348880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.285 [2024-12-14 16:49:04.349149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.349169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.353724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.285 [2024-12-14 16:49:04.353997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.354016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.358701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.285 [2024-12-14 16:49:04.358965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.358985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.285 [2024-12-14 16:49:04.363732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.285 [2024-12-14 16:49:04.364011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.285 [2024-12-14 16:49:04.364031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.545 [2024-12-14 16:49:04.368184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.368463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.368482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.545 [2024-12-14 16:49:04.372647] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.372933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.372953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.545 [2024-12-14 16:49:04.377244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.377504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.377524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.545 [2024-12-14 16:49:04.382054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.382309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.382328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.545 [2024-12-14 16:49:04.386704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.386975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.386994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:34.545 [2024-12-14 16:49:04.390985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.391255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.391275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.545 [2024-12-14 16:49:04.395421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.395692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.395711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.545 [2024-12-14 16:49:04.400454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.400734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.400754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.545 [2024-12-14 16:49:04.405106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.545 [2024-12-14 16:49:04.405369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.545 [2024-12-14 16:49:04.405388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.545 [2024-12-14 16:49:04.410261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8
00:35:34.545 [2024-12-14 16:49:04.410526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.545 [2024-12-14 16:49:04.410546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:34.545 5916.00 IOPS, 739.50 MiB/s [2024-12-14T15:49:04.631Z]
[… the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats on tqpair=(0x1d4d100), pdu=0x200016eff3c8 for subsequent LBAs, from 16:49:04.415 through 16:49:04.754 (elapsed 00:35:34.545–00:35:34.808) …]
00:35:34.808 [2024-12-14 16:49:04.754243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8
00:35:34.808 [2024-12-14 16:49:04.754406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1
lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.808 [2024-12-14 16:49:04.754426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.808 [2024-12-14 16:49:04.758500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.808 [2024-12-14 16:49:04.758670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.758688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.762517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.762682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.762700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.767371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.767536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.767561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.772748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.772888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.772906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.777298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.777447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.777465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.781162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.781321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.781339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.784790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.784951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.784969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.788404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 
00:35:34.809 [2024-12-14 16:49:04.788575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.788594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.792009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.792175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.792193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.795612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.795776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.795794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.799177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.799342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.799360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.802771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.802930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.802948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.806344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.806497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.806516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.809965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.810121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.810139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.813524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.813686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.813705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.817114] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.817275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.817293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.820686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.820846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.820866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.824223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.824382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.824400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.827828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.827985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.828003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:34.809 [2024-12-14 16:49:04.831394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.831540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.831565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.834971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.835134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.835155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.838699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.838855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.838874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.842892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.843046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.843066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.847064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.847214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.847232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.850838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.850979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.850998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.854543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.854709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.854727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.858291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.858450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.858468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.862014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.862175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.809 [2024-12-14 16:49:04.862193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.809 [2024-12-14 16:49:04.865787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.809 [2024-12-14 16:49:04.865946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.810 [2024-12-14 16:49:04.865964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.810 [2024-12-14 16:49:04.869593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.810 [2024-12-14 16:49:04.869752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.810 [2024-12-14 16:49:04.869771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.810 [2024-12-14 16:49:04.873632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.810 [2024-12-14 16:49:04.873795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.810 [2024-12-14 16:49:04.873815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.810 [2024-12-14 16:49:04.877315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.810 [2024-12-14 16:49:04.877479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.810 [2024-12-14 16:49:04.877499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.810 [2024-12-14 16:49:04.880952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.810 [2024-12-14 16:49:04.881114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.810 [2024-12-14 16:49:04.881133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.810 [2024-12-14 16:49:04.884575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.810 [2024-12-14 16:49:04.884755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.810 [2024-12-14 16:49:04.884774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.810 [2024-12-14 16:49:04.888222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:34.810 [2024-12-14 16:49:04.888386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.810 [2024-12-14 16:49:04.888404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.891970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.892138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.892156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.896037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.896186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.896205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.900636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.900796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.900820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.905512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.905671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.905690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.910059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.910221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.910241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.914636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.914789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.914809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.918911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.919094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.919113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.923602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 
00:35:35.070 [2024-12-14 16:49:04.923750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.923769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.928152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.928317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.928337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.932386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.932563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.932586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.936820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.936977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.070 [2024-12-14 16:49:04.936997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.070 [2024-12-14 16:49:04.941329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.070 [2024-12-14 16:49:04.941494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.941519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.946183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.946323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.946345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.950618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.950769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.950788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.954787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.954956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.954977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.958658] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.958827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.958849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.962350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.962516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.962535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.966032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.966199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.966218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.969784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.969959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.969979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:35.071 [2024-12-14 16:49:04.973623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.973789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.973808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.977359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.977526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.977545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.981111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.981267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.981286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.985108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.985275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.985295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.990047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.990187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.990206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:04.994949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:04.995160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:04.995180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.000449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.000632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.000651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.006812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.006987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.007006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.011835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.012020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.012039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.016169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.016398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.016418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.020374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.020543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.020568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.024321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.024486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:35.071 [2024-12-14 16:49:05.024505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.028244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.028401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.028419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.033306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.033585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.033605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.038104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.038268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.038287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.042253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.042524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.042544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.047355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.047608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.047628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.052410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.052617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.052635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.057850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.057973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.057994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.064192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.071 [2024-12-14 16:49:05.064354] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.071 [2024-12-14 16:49:05.064372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.071 [2024-12-14 16:49:05.069501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.069640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.069659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.073675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.073767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.073785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.077550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.077674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.077691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.081243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.081396] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.081414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.085012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.085166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.085184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.089514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.089644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.089661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.093256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.093385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.093403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.097107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with 
pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.097232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.097252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.101400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.101496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.101514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.105843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.105973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.105991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.109978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.110154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.110173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.114189] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.114285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.114303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.118620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.118714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.118732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.122886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.123020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.123038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.127276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.127388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.127406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 
16:49:05.131702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.131818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.131836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.136055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.136148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.136167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.140096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.140199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.140217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.144028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.144178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.144196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.147861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.148003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.148021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.072 [2024-12-14 16:49:05.152327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.072 [2024-12-14 16:49:05.152458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.072 [2024-12-14 16:49:05.152476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.331 [2024-12-14 16:49:05.156889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.331 [2024-12-14 16:49:05.157024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.331 [2024-12-14 16:49:05.157042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.331 [2024-12-14 16:49:05.160717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.331 [2024-12-14 16:49:05.160845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.331 [2024-12-14 16:49:05.160863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.331 [2024-12-14 16:49:05.164448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.331 [2024-12-14 16:49:05.164578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.331 [2024-12-14 16:49:05.164597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.331 [2024-12-14 16:49:05.167998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.331 [2024-12-14 16:49:05.168115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.331 [2024-12-14 16:49:05.168139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.331 [2024-12-14 16:49:05.171640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.331 [2024-12-14 16:49:05.171758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.331 [2024-12-14 16:49:05.171776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.331 [2024-12-14 16:49:05.175361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.331 [2024-12-14 16:49:05.175490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.331 [2024-12-14 16:49:05.175507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.331 [2024-12-14 16:49:05.179003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.331 [2024-12-14 16:49:05.179126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.331 [2024-12-14 16:49:05.179144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.182696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.182838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.182855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.186460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.186606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.186623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.190522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.190661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:35.332 [2024-12-14 16:49:05.190679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.194248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.194360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.194377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.197904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.198040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.198058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.201638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.201775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.201797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.205588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.205714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.205732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.209399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.209537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.209562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.213223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.213337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.213357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.216951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.217077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.217096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.220609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.220745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.220764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.224285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.224407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.224427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.227913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.228050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.228067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.231617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.231753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.231771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.235195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 
00:35:35.332 [2024-12-14 16:49:05.235332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.235350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.238826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.238959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.238977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.243211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.243265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.243282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.247428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.247546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.247569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.251318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.251457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.251475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.255047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.255164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.255183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.258914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.259058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.259075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.262530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.262649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.262667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.266310] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.266439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.266460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.270502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.270626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.270644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.274910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.275047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.275065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.332 [2024-12-14 16:49:05.278971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.332 [2024-12-14 16:49:05.279112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.332 [2024-12-14 16:49:05.279129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:35.333 [2024-12-14 16:49:05.283449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.283575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.283594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.288268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.288375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.288392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.292216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.292352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.292370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.296028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.296171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.296189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.299769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.299909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.299927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.303581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.303694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.303715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.307394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.307516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.307533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.311118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.311232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.311249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.314885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.315004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.315022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.318676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.318798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.318815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.322371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.322494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.322511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.326285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.326417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:35.333 [2024-12-14 16:49:05.326435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.330314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.330444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.330462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.334096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.334215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.334232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.337921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.338051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.338069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.341781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.341902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.341920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.346517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.346773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.346793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.351933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.352066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.352084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.356959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.357140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.357158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.361984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.362086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.362104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.366046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.366118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.366137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.371116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.371245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.371263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.376068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.376191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.376213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.380939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 
00:35:35.333 [2024-12-14 16:49:05.381091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.381109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.386741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.386904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.386924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.392154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.392304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.333 [2024-12-14 16:49:05.392322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.333 [2024-12-14 16:49:05.398722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.333 [2024-12-14 16:49:05.398915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.334 [2024-12-14 16:49:05.398934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.334 [2024-12-14 16:49:05.405427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.334 [2024-12-14 16:49:05.405604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.334 [2024-12-14 16:49:05.405622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.334 [2024-12-14 16:49:05.412852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.334 [2024-12-14 16:49:05.413051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.334 [2024-12-14 16:49:05.413070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:35.592 [2024-12-14 16:49:05.419468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.592 [2024-12-14 16:49:05.419591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.592 [2024-12-14 16:49:05.419609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:35.592 [2024-12-14 16:49:05.425279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.592 [2024-12-14 16:49:05.425372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.592 [2024-12-14 16:49:05.425390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:35.592 [2024-12-14 16:49:05.431197] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d4d100) with pdu=0x200016eff3c8 00:35:35.592 [2024-12-14 16:49:05.432495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:35.592 [2024-12-14 16:49:05.432519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:35.592 6634.00 IOPS, 829.25 MiB/s 00:35:35.592 Latency(us) 00:35:35.592 [2024-12-14T15:49:05.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.592 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:35.592 nvme0n1 : 2.00 6630.31 828.79 0.00 0.00 2408.75 1685.21 8675.72 00:35:35.592 [2024-12-14T15:49:05.678Z] =================================================================================================================== 00:35:35.592 [2024-12-14T15:49:05.678Z] Total : 6630.31 828.79 0.00 0.00 2408.75 1685.21 8675.72 00:35:35.592 { 00:35:35.592 "results": [ 00:35:35.592 { 00:35:35.592 "job": "nvme0n1", 00:35:35.592 "core_mask": "0x2", 00:35:35.592 "workload": "randwrite", 00:35:35.592 "status": "finished", 00:35:35.592 "queue_depth": 16, 00:35:35.592 "io_size": 131072, 00:35:35.592 "runtime": 2.003525, 00:35:35.592 "iops": 6630.314071449071, 00:35:35.592 "mibps": 828.7892589311339, 00:35:35.592 "io_failed": 0, 00:35:35.592 "io_timeout": 0, 00:35:35.592 "avg_latency_us": 2408.7541090606674, 00:35:35.592 "min_latency_us": 1685.2114285714285, 00:35:35.592 "max_latency_us": 8675.718095238095 00:35:35.592 } 00:35:35.592 ], 00:35:35.592 "core_count": 1 00:35:35.592 } 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:35.592 
16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:35.592 | .driver_specific 00:35:35.592 | .nvme_error 00:35:35.592 | .status_code 00:35:35.592 | .command_transient_transport_error' 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 429 > 0 )) 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1198886 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1198886 ']' 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1198886 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.592 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198886 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198886' 00:35:35.851 killing process with pid 1198886 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1198886 00:35:35.851 Received shutdown signal, test time was about 2.000000 seconds 00:35:35.851 
00:35:35.851 Latency(us) 00:35:35.851 [2024-12-14T15:49:05.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.851 [2024-12-14T15:49:05.937Z] =================================================================================================================== 00:35:35.851 [2024-12-14T15:49:05.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1198886 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1197190 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197190 ']' 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197190 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197190 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197190' 00:35:35.851 killing process with pid 1197190 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197190 00:35:35.851 16:49:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197190 00:35:36.110 00:35:36.110 real 
0m14.013s 00:35:36.110 user 0m26.883s 00:35:36.110 sys 0m4.499s 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:36.110 ************************************ 00:35:36.110 END TEST nvmf_digest_error 00:35:36.110 ************************************ 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.110 rmmod nvme_tcp 00:35:36.110 rmmod nvme_fabrics 00:35:36.110 rmmod nvme_keyring 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1197190 ']' 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1197190 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1197190 ']' 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1197190 
00:35:36.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1197190) - No such process 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1197190 is not found' 00:35:36.110 Process with pid 1197190 is not found 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.110 16:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:38.645 00:35:38.645 real 0m36.373s 00:35:38.645 user 0m55.631s 00:35:38.645 sys 0m13.523s 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:38.645 
************************************ 00:35:38.645 END TEST nvmf_digest 00:35:38.645 ************************************ 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.645 ************************************ 00:35:38.645 START TEST nvmf_bdevperf 00:35:38.645 ************************************ 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:38.645 * Looking for test storage... 
00:35:38.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.645 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:38.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.646 --rc genhtml_branch_coverage=1 00:35:38.646 --rc genhtml_function_coverage=1 00:35:38.646 --rc genhtml_legend=1 00:35:38.646 --rc geninfo_all_blocks=1 00:35:38.646 --rc geninfo_unexecuted_blocks=1 00:35:38.646 00:35:38.646 ' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:35:38.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.646 --rc genhtml_branch_coverage=1 00:35:38.646 --rc genhtml_function_coverage=1 00:35:38.646 --rc genhtml_legend=1 00:35:38.646 --rc geninfo_all_blocks=1 00:35:38.646 --rc geninfo_unexecuted_blocks=1 00:35:38.646 00:35:38.646 ' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:38.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.646 --rc genhtml_branch_coverage=1 00:35:38.646 --rc genhtml_function_coverage=1 00:35:38.646 --rc genhtml_legend=1 00:35:38.646 --rc geninfo_all_blocks=1 00:35:38.646 --rc geninfo_unexecuted_blocks=1 00:35:38.646 00:35:38.646 ' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:38.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.646 --rc genhtml_branch_coverage=1 00:35:38.646 --rc genhtml_function_coverage=1 00:35:38.646 --rc genhtml_legend=1 00:35:38.646 --rc geninfo_all_blocks=1 00:35:38.646 --rc geninfo_unexecuted_blocks=1 00:35:38.646 00:35:38.646 ' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:38.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.646 16:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:44.044 16:49:14 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:44.044 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.044 
16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:44.044 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:44.044 Found net devices under 0000:af:00.0: cvl_0_0 00:35:44.044 16:49:14 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:44.044 Found net devices under 0000:af:00.1: cvl_0_1 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:44.044 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:44.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:44.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:35:44.303 00:35:44.303 --- 10.0.0.2 ping statistics --- 00:35:44.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.303 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:44.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:44.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:35:44.303 00:35:44.303 --- 10.0.0.1 ping statistics --- 00:35:44.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.303 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:44.303 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:44.304 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:44.304 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:44.304 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1202835 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1202835 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1202835 ']' 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.562 [2024-12-14 16:49:14.455493] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:44.562 [2024-12-14 16:49:14.455539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.562 [2024-12-14 16:49:14.532331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:44.562 [2024-12-14 16:49:14.554529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.562 [2024-12-14 16:49:14.554572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.562 [2024-12-14 16:49:14.554579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.562 [2024-12-14 16:49:14.554585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.562 [2024-12-14 16:49:14.554591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:44.562 [2024-12-14 16:49:14.555905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.562 [2024-12-14 16:49:14.556009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.562 [2024-12-14 16:49:14.556011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:44.562 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.821 [2024-12-14 16:49:14.682494] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.821 Malloc0 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.821 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:44.821 [2024-12-14 16:49:14.752115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:44.822 
16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:44.822 { 00:35:44.822 "params": { 00:35:44.822 "name": "Nvme$subsystem", 00:35:44.822 "trtype": "$TEST_TRANSPORT", 00:35:44.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.822 "adrfam": "ipv4", 00:35:44.822 "trsvcid": "$NVMF_PORT", 00:35:44.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.822 "hdgst": ${hdgst:-false}, 00:35:44.822 "ddgst": ${ddgst:-false} 00:35:44.822 }, 00:35:44.822 "method": "bdev_nvme_attach_controller" 00:35:44.822 } 00:35:44.822 EOF 00:35:44.822 )") 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:44.822 16:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:44.822 "params": { 00:35:44.822 "name": "Nvme1", 00:35:44.822 "trtype": "tcp", 00:35:44.822 "traddr": "10.0.0.2", 00:35:44.822 "adrfam": "ipv4", 00:35:44.822 "trsvcid": "4420", 00:35:44.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:44.822 "hdgst": false, 00:35:44.822 "ddgst": false 00:35:44.822 }, 00:35:44.822 "method": "bdev_nvme_attach_controller" 00:35:44.822 }' 00:35:44.822 [2024-12-14 16:49:14.804527] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:44.822 [2024-12-14 16:49:14.804575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202864 ] 00:35:44.822 [2024-12-14 16:49:14.880884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.822 [2024-12-14 16:49:14.903466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.080 Running I/O for 1 seconds... 00:35:46.455 11213.00 IOPS, 43.80 MiB/s 00:35:46.455 Latency(us) 00:35:46.455 [2024-12-14T15:49:16.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.455 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:46.455 Verification LBA range: start 0x0 length 0x4000 00:35:46.455 Nvme1n1 : 1.05 10845.60 42.37 0.00 0.00 11337.37 2418.59 43690.67 00:35:46.455 [2024-12-14T15:49:16.541Z] =================================================================================================================== 00:35:46.455 [2024-12-14T15:49:16.541Z] Total : 10845.60 42.37 0.00 0.00 11337.37 2418.59 43690.67 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1203146 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:46.455 { 00:35:46.455 "params": { 00:35:46.455 "name": "Nvme$subsystem", 00:35:46.455 "trtype": "$TEST_TRANSPORT", 00:35:46.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.455 "adrfam": "ipv4", 00:35:46.455 "trsvcid": "$NVMF_PORT", 00:35:46.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.455 "hdgst": ${hdgst:-false}, 00:35:46.455 "ddgst": ${ddgst:-false} 00:35:46.455 }, 00:35:46.455 "method": "bdev_nvme_attach_controller" 00:35:46.455 } 00:35:46.455 EOF 00:35:46.455 )") 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:46.455 16:49:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:46.455 "params": { 00:35:46.455 "name": "Nvme1", 00:35:46.455 "trtype": "tcp", 00:35:46.455 "traddr": "10.0.0.2", 00:35:46.455 "adrfam": "ipv4", 00:35:46.455 "trsvcid": "4420", 00:35:46.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:46.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:46.455 "hdgst": false, 00:35:46.455 "ddgst": false 00:35:46.455 }, 00:35:46.455 "method": "bdev_nvme_attach_controller" 00:35:46.455 }' 00:35:46.455 [2024-12-14 16:49:16.347353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:46.455 [2024-12-14 16:49:16.347405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203146 ] 00:35:46.455 [2024-12-14 16:49:16.420831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.455 [2024-12-14 16:49:16.442656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.713 Running I/O for 15 seconds... 00:35:48.578 11536.00 IOPS, 45.06 MiB/s [2024-12-14T15:49:19.600Z] 11529.50 IOPS, 45.04 MiB/s [2024-12-14T15:49:19.600Z] 16:49:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1202835 00:35:49.514 16:49:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:49.514 [2024-12-14 16:49:19.318387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.514 [2024-12-14 16:49:19.318426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.514 [2024-12-14 16:49:19.318443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.514 [2024-12-14 16:49:19.318456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318483] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 
16:49:19.318685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:48 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.515 [2024-12-14 16:49:19.318793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.515 [2024-12-14 16:49:19.318814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.515 [2024-12-14 16:49:19.318831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.515 [2024-12-14 16:49:19.318849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.515 [2024-12-14 16:49:19.318867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.515 [2024-12-14 16:49:19.318885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.515 [2024-12-14 16:49:19.318900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 
16:49:19.318973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.318987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.318995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.319002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.319010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.319016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.319024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.319031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.319039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.319046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.319054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:75 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.319061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.319069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.319075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.319083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.319091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.515 [2024-12-14 16:49:19.319098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.515 [2024-12-14 16:49:19.319105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.516 [2024-12-14 16:49:19.319121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319222] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.516 [2024-12-14 16:49:19.319254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:37 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:49.516 [2024-12-14 16:49:19.319393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:103 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.516 [2024-12-14 16:49:19.319804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.516 [2024-12-14 16:49:19.319813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.319820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.319835] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.517 [2024-12-14 16:49:19.319849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.517 [2024-12-14 16:49:19.319863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.517 [2024-12-14 16:49:19.319879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.517 [2024-12-14 16:49:19.319893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.517 [2024-12-14 16:49:19.319907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:124 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.319922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.319938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.319953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.319967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.319981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.319989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.319996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:49.517 [2024-12-14 16:49:19.320004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:123 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.517 [2024-12-14 16:49:19.320403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.517 [2024-12-14 16:49:19.320411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.518 [2024-12-14 16:49:19.320417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:88 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.518 [2024-12-14 16:49:19.320432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.518 [2024-12-14 16:49:19.320447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.518 [2024-12-14 16:49:19.320461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.518 [2024-12-14 16:49:19.320475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.518 [2024-12-14 16:49:19.320490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6920 is same with the state(6) to be set 00:35:49.518 [2024-12-14 16:49:19.320508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:49.518 [2024-12-14 16:49:19.320513] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:49.518 [2024-12-14 16:49:19.320519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120440 len:8 PRP1 0x0 PRP2 0x0 00:35:49.518 [2024-12-14 16:49:19.320527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:49.518 [2024-12-14 16:49:19.320622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:49.518 [2024-12-14 16:49:19.320636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:49.518 [2024-12-14 16:49:19.320649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:49.518 [2024-12-14 16:49:19.320663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:49.518 [2024-12-14 16:49:19.320669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.518 [2024-12-14 
16:49:19.323443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.518 [2024-12-14 16:49:19.323469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.518 [2024-12-14 16:49:19.323945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.518 [2024-12-14 16:49:19.323962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.518 [2024-12-14 16:49:19.323970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.518 [2024-12-14 16:49:19.324146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.518 [2024-12-14 16:49:19.324322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.518 [2024-12-14 16:49:19.324330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.518 [2024-12-14 16:49:19.324338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.518 [2024-12-14 16:49:19.324346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.518 [2024-12-14 16:49:19.336732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.518 [2024-12-14 16:49:19.337151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.518 [2024-12-14 16:49:19.337169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.518 [2024-12-14 16:49:19.337178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.518 [2024-12-14 16:49:19.337352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.518 [2024-12-14 16:49:19.337530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.518 [2024-12-14 16:49:19.337540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.518 [2024-12-14 16:49:19.337548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.518 [2024-12-14 16:49:19.337563] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.518 [2024-12-14 16:49:19.349656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.518 [2024-12-14 16:49:19.349942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.518 [2024-12-14 16:49:19.349960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.518 [2024-12-14 16:49:19.349968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.518 [2024-12-14 16:49:19.350137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.518 [2024-12-14 16:49:19.350305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.518 [2024-12-14 16:49:19.350315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.518 [2024-12-14 16:49:19.350322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.518 [2024-12-14 16:49:19.350328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.518 [2024-12-14 16:49:19.362725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.518 [2024-12-14 16:49:19.363063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.518 [2024-12-14 16:49:19.363109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.518 [2024-12-14 16:49:19.363134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.518 [2024-12-14 16:49:19.363734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.518 [2024-12-14 16:49:19.363924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.518 [2024-12-14 16:49:19.363934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.518 [2024-12-14 16:49:19.363940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.518 [2024-12-14 16:49:19.363947] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.518 [2024-12-14 16:49:19.375660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.518 [2024-12-14 16:49:19.375937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.518 [2024-12-14 16:49:19.375954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.518 [2024-12-14 16:49:19.375962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.518 [2024-12-14 16:49:19.376122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.518 [2024-12-14 16:49:19.376282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.518 [2024-12-14 16:49:19.376291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.518 [2024-12-14 16:49:19.376301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.518 [2024-12-14 16:49:19.376308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.518 [2024-12-14 16:49:19.388625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.518 [2024-12-14 16:49:19.388950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.518 [2024-12-14 16:49:19.388966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.518 [2024-12-14 16:49:19.388974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.518 [2024-12-14 16:49:19.389134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.518 [2024-12-14 16:49:19.389294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.518 [2024-12-14 16:49:19.389304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.518 [2024-12-14 16:49:19.389310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.518 [2024-12-14 16:49:19.389317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.518 [2024-12-14 16:49:19.401430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.518 [2024-12-14 16:49:19.401722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.518 [2024-12-14 16:49:19.401772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.518 [2024-12-14 16:49:19.401796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.518 [2024-12-14 16:49:19.402314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.518 [2024-12-14 16:49:19.402475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.518 [2024-12-14 16:49:19.402484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.518 [2024-12-14 16:49:19.402490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.402497] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.414422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.414845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.414864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.519 [2024-12-14 16:49:19.414872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.519 [2024-12-14 16:49:19.415041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.519 [2024-12-14 16:49:19.415210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.519 [2024-12-14 16:49:19.415219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.519 [2024-12-14 16:49:19.415226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.415233] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.427361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.427653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.427671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.519 [2024-12-14 16:49:19.427679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.519 [2024-12-14 16:49:19.427849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.519 [2024-12-14 16:49:19.428018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.519 [2024-12-14 16:49:19.428027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.519 [2024-12-14 16:49:19.428033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.428040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.440209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.440596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.440614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.519 [2024-12-14 16:49:19.440622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.519 [2024-12-14 16:49:19.440782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.519 [2024-12-14 16:49:19.440943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.519 [2024-12-14 16:49:19.440952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.519 [2024-12-14 16:49:19.440958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.440964] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.453127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.453460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.453476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.519 [2024-12-14 16:49:19.453484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.519 [2024-12-14 16:49:19.453651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.519 [2024-12-14 16:49:19.453811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.519 [2024-12-14 16:49:19.453820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.519 [2024-12-14 16:49:19.453826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.453832] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.465991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.466406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.466450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.519 [2024-12-14 16:49:19.466481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.519 [2024-12-14 16:49:19.467083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.519 [2024-12-14 16:49:19.467531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.519 [2024-12-14 16:49:19.467540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.519 [2024-12-14 16:49:19.467546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.467553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.478738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.479159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.479203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.519 [2024-12-14 16:49:19.479227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.519 [2024-12-14 16:49:19.479828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.519 [2024-12-14 16:49:19.480298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.519 [2024-12-14 16:49:19.480307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.519 [2024-12-14 16:49:19.480314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.480320] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.491528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.491891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.491935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.519 [2024-12-14 16:49:19.491958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.519 [2024-12-14 16:49:19.492453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.519 [2024-12-14 16:49:19.492619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.519 [2024-12-14 16:49:19.492628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.519 [2024-12-14 16:49:19.492635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.492641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.504309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.504721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.504738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.519 [2024-12-14 16:49:19.504746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.519 [2024-12-14 16:49:19.504905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.519 [2024-12-14 16:49:19.505067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.519 [2024-12-14 16:49:19.505077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.519 [2024-12-14 16:49:19.505083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.519 [2024-12-14 16:49:19.505089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.519 [2024-12-14 16:49:19.517164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.519 [2024-12-14 16:49:19.517491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.519 [2024-12-14 16:49:19.517507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.520 [2024-12-14 16:49:19.517515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.520 [2024-12-14 16:49:19.517702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.520 [2024-12-14 16:49:19.517872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.520 [2024-12-14 16:49:19.517881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.520 [2024-12-14 16:49:19.517887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.520 [2024-12-14 16:49:19.517894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.520 [2024-12-14 16:49:19.529912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.520 [2024-12-14 16:49:19.530341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.520 [2024-12-14 16:49:19.530358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.520 [2024-12-14 16:49:19.530366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.520 [2024-12-14 16:49:19.530525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.520 [2024-12-14 16:49:19.530712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.520 [2024-12-14 16:49:19.530722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.520 [2024-12-14 16:49:19.530728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.520 [2024-12-14 16:49:19.530735] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.520 [2024-12-14 16:49:19.542703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.520 [2024-12-14 16:49:19.543125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.520 [2024-12-14 16:49:19.543170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.520 [2024-12-14 16:49:19.543194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.520 [2024-12-14 16:49:19.543793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.520 [2024-12-14 16:49:19.544380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.520 [2024-12-14 16:49:19.544406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.520 [2024-12-14 16:49:19.544436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.520 [2024-12-14 16:49:19.544457] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.520 [2024-12-14 16:49:19.555495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.520 [2024-12-14 16:49:19.555888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.520 [2024-12-14 16:49:19.555905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.520 [2024-12-14 16:49:19.555912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.520 [2024-12-14 16:49:19.556072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.520 [2024-12-14 16:49:19.556231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.520 [2024-12-14 16:49:19.556241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.520 [2024-12-14 16:49:19.556247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.520 [2024-12-14 16:49:19.556253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.520 [2024-12-14 16:49:19.568231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.520 [2024-12-14 16:49:19.568616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.520 [2024-12-14 16:49:19.568652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.520 [2024-12-14 16:49:19.568679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.520 [2024-12-14 16:49:19.569264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.520 [2024-12-14 16:49:19.569551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.520 [2024-12-14 16:49:19.569578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.520 [2024-12-14 16:49:19.569593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.520 [2024-12-14 16:49:19.569608] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.520 [2024-12-14 16:49:19.583259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.520 [2024-12-14 16:49:19.583685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.520 [2024-12-14 16:49:19.583709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.520 [2024-12-14 16:49:19.583720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.520 [2024-12-14 16:49:19.583976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.520 [2024-12-14 16:49:19.584232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.520 [2024-12-14 16:49:19.584246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.520 [2024-12-14 16:49:19.584256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.520 [2024-12-14 16:49:19.584266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.780 [2024-12-14 16:49:19.596351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.780 [2024-12-14 16:49:19.596756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.780 [2024-12-14 16:49:19.596775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.780 [2024-12-14 16:49:19.596782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.780 [2024-12-14 16:49:19.596956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.597130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.597139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.597146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.597153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 10451.00 IOPS, 40.82 MiB/s [2024-12-14T15:49:19.867Z] [2024-12-14 16:49:19.610721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.611146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.611191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.611215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.611725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.611895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.611905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.611911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.611918] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.623540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.623960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.624006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.624030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.624409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.624577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.624603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.624610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.624617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.636277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.636619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.636636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.636648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.636809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.636969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.636978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.636984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.636990] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.649214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.649621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.649651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.649676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.650260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.650771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.650780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.650787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.650793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.662065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.662487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.662531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.662571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.663159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.663754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.663781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.663803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.663823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.675024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.675437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.675479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.675505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.676104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.676710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.676737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.676758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.676790] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.687789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.688176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.688216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.688241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.688809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.688980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.688989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.688996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.689003] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.700603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.701016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.701032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.701040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.701199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.701359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.701369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.701375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.701382] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.713460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.713864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.713910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.713933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.714516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.781 [2024-12-14 16:49:19.714751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.781 [2024-12-14 16:49:19.714760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.781 [2024-12-14 16:49:19.714770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.781 [2024-12-14 16:49:19.714776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.781 [2024-12-14 16:49:19.726284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.781 [2024-12-14 16:49:19.726696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.781 [2024-12-14 16:49:19.726713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.781 [2024-12-14 16:49:19.726721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.781 [2024-12-14 16:49:19.726880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.782 [2024-12-14 16:49:19.727040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.782 [2024-12-14 16:49:19.727049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.782 [2024-12-14 16:49:19.727055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.782 [2024-12-14 16:49:19.727061] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.782 [2024-12-14 16:49:19.739076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.782 [2024-12-14 16:49:19.739484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-12-14 16:49:19.739529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.782 [2024-12-14 16:49:19.739552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.782 [2024-12-14 16:49:19.739955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.782 [2024-12-14 16:49:19.740125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.782 [2024-12-14 16:49:19.740134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.782 [2024-12-14 16:49:19.740140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.782 [2024-12-14 16:49:19.740147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.782 [2024-12-14 16:49:19.751880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:49.782 [2024-12-14 16:49:19.752204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.782 [2024-12-14 16:49:19.752220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:49.782 [2024-12-14 16:49:19.752229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:49.782 [2024-12-14 16:49:19.752387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:49.782 [2024-12-14 16:49:19.752548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:49.782 [2024-12-14 16:49:19.752568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:49.782 [2024-12-14 16:49:19.752576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:49.782 [2024-12-14 16:49:19.752584] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:49.782 [2024-12-14 16:49:19.764678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.782 [2024-12-14 16:49:19.765068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.782 [2024-12-14 16:49:19.765085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:49.782 [2024-12-14 16:49:19.765092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:49.782 [2024-12-14 16:49:19.765252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:49.782 [2024-12-14 16:49:19.765412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.782 [2024-12-14 16:49:19.765421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.782 [2024-12-14 16:49:19.765427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.782 [2024-12-14 16:49:19.765434] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.782 [2024-12-14 16:49:19.777529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.782 [2024-12-14 16:49:19.777942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.782 [2024-12-14 16:49:19.777959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:49.782 [2024-12-14 16:49:19.777967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:49.782 [2024-12-14 16:49:19.778126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:49.782 [2024-12-14 16:49:19.778286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.782 [2024-12-14 16:49:19.778295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.782 [2024-12-14 16:49:19.778301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.782 [2024-12-14 16:49:19.778308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.782 [2024-12-14 16:49:19.790283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.782 [2024-12-14 16:49:19.790671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.782 [2024-12-14 16:49:19.790689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:49.782 [2024-12-14 16:49:19.790697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:49.782 [2024-12-14 16:49:19.790857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:49.782 [2024-12-14 16:49:19.791016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.782 [2024-12-14 16:49:19.791025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.782 [2024-12-14 16:49:19.791031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.782 [2024-12-14 16:49:19.791038] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.782 [2024-12-14 16:49:19.803099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.782 [2024-12-14 16:49:19.803532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.782 [2024-12-14 16:49:19.803590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:49.782 [2024-12-14 16:49:19.803622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:49.782 [2024-12-14 16:49:19.804129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:49.782 [2024-12-14 16:49:19.804289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.782 [2024-12-14 16:49:19.804298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.782 [2024-12-14 16:49:19.804304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.782 [2024-12-14 16:49:19.804311] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.782 [2024-12-14 16:49:19.815916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.782 [2024-12-14 16:49:19.816300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.782 [2024-12-14 16:49:19.816317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:49.782 [2024-12-14 16:49:19.816324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:49.782 [2024-12-14 16:49:19.816484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:49.782 [2024-12-14 16:49:19.816669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.782 [2024-12-14 16:49:19.816678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.782 [2024-12-14 16:49:19.816685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.782 [2024-12-14 16:49:19.816692] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.782 [2024-12-14 16:49:19.828655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.782 [2024-12-14 16:49:19.829129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.782 [2024-12-14 16:49:19.829175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:49.782 [2024-12-14 16:49:19.829199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:49.782 [2024-12-14 16:49:19.829798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:49.782 [2024-12-14 16:49:19.830340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.782 [2024-12-14 16:49:19.830349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.782 [2024-12-14 16:49:19.830356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.782 [2024-12-14 16:49:19.830362] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.782 [2024-12-14 16:49:19.841653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.782 [2024-12-14 16:49:19.842088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.782 [2024-12-14 16:49:19.842132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:49.782 [2024-12-14 16:49:19.842156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:49.782 [2024-12-14 16:49:19.842750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:49.782 [2024-12-14 16:49:19.843235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.782 [2024-12-14 16:49:19.843244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.782 [2024-12-14 16:49:19.843251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.782 [2024-12-14 16:49:19.843258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:49.782 [2024-12-14 16:49:19.854598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:49.782 [2024-12-14 16:49:19.855014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.782 [2024-12-14 16:49:19.855031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:49.782 [2024-12-14 16:49:19.855038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:49.782 [2024-12-14 16:49:19.855198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:49.782 [2024-12-14 16:49:19.855358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:49.782 [2024-12-14 16:49:19.855367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:49.783 [2024-12-14 16:49:19.855373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:49.783 [2024-12-14 16:49:19.855379] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.043 [2024-12-14 16:49:19.867637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.043 [2024-12-14 16:49:19.868050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.043 [2024-12-14 16:49:19.868067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.043 [2024-12-14 16:49:19.868074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.043 [2024-12-14 16:49:19.868233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.043 [2024-12-14 16:49:19.868393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.043 [2024-12-14 16:49:19.868402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.043 [2024-12-14 16:49:19.868409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.043 [2024-12-14 16:49:19.868415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.043 [2024-12-14 16:49:19.880503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.043 [2024-12-14 16:49:19.880869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.043 [2024-12-14 16:49:19.880915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.043 [2024-12-14 16:49:19.880939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.043 [2024-12-14 16:49:19.881450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.043 [2024-12-14 16:49:19.881633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.043 [2024-12-14 16:49:19.881643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.043 [2024-12-14 16:49:19.881653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.043 [2024-12-14 16:49:19.881661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.043 [2024-12-14 16:49:19.893322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.043 [2024-12-14 16:49:19.893731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.043 [2024-12-14 16:49:19.893749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.043 [2024-12-14 16:49:19.893756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.043 [2024-12-14 16:49:19.893916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.043 [2024-12-14 16:49:19.894077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.043 [2024-12-14 16:49:19.894085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.043 [2024-12-14 16:49:19.894091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.043 [2024-12-14 16:49:19.894098] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.043 [2024-12-14 16:49:19.906056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.043 [2024-12-14 16:49:19.906469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.043 [2024-12-14 16:49:19.906487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.043 [2024-12-14 16:49:19.906494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.043 [2024-12-14 16:49:19.906660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.043 [2024-12-14 16:49:19.906821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.043 [2024-12-14 16:49:19.906830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.043 [2024-12-14 16:49:19.906836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.043 [2024-12-14 16:49:19.906843] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.043 [2024-12-14 16:49:19.918881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.043 [2024-12-14 16:49:19.919265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.043 [2024-12-14 16:49:19.919282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.043 [2024-12-14 16:49:19.919289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.043 [2024-12-14 16:49:19.919448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.043 [2024-12-14 16:49:19.919630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.043 [2024-12-14 16:49:19.919640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.043 [2024-12-14 16:49:19.919647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.043 [2024-12-14 16:49:19.919654] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.043 [2024-12-14 16:49:19.931793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.043 [2024-12-14 16:49:19.932237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.043 [2024-12-14 16:49:19.932282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.043 [2024-12-14 16:49:19.932306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.043 [2024-12-14 16:49:19.932898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.043 [2024-12-14 16:49:19.933291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.043 [2024-12-14 16:49:19.933310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.043 [2024-12-14 16:49:19.933324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.043 [2024-12-14 16:49:19.933339] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:19.946664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:19.947189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:19.947239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:19.947262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:19.947859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:19.948442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:19.948455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:19.948466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:19.948476] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:19.959635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:19.960062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:19.960080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:19.960087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:19.960256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:19.960425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:19.960434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:19.960441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:19.960448] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:19.972483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:19.972893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:19.972910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:19.972921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:19.973081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:19.973240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:19.973250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:19.973256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:19.973263] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:19.985302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:19.985734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:19.985752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:19.985760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:19.985920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:19.986080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:19.986088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:19.986095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:19.986101] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:19.998280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:19.998634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:19.998651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:19.998659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:19.998832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:19.998991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:19.999000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:19.999007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:19.999013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:20.011412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:20.011757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:20.011776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:20.011786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:20.011962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:20.012139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:20.012150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:20.012156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:20.012163] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:20.024466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:20.024894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:20.024913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:20.024921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:20.025090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:20.025259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:20.025269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:20.025276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:20.025284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:20.037465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:20.037808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:20.037826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:20.037835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:20.038010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:20.038185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:20.038195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:20.038203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:20.038212] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:20.050463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:20.050890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:20.050907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:20.050915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:20.051089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:20.051265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:20.051274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:20.051285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:20.051293] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:20.063449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:20.063780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.044 [2024-12-14 16:49:20.063798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.044 [2024-12-14 16:49:20.063807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.044 [2024-12-14 16:49:20.063975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.044 [2024-12-14 16:49:20.064144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.044 [2024-12-14 16:49:20.064153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.044 [2024-12-14 16:49:20.064160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.044 [2024-12-14 16:49:20.064167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.044 [2024-12-14 16:49:20.076454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.044 [2024-12-14 16:49:20.076863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.045 [2024-12-14 16:49:20.076882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.045 [2024-12-14 16:49:20.076891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.045 [2024-12-14 16:49:20.077060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.045 [2024-12-14 16:49:20.077229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.045 [2024-12-14 16:49:20.077238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.045 [2024-12-14 16:49:20.077245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.045 [2024-12-14 16:49:20.077251] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.045 [2024-12-14 16:49:20.089438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.045 [2024-12-14 16:49:20.089931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.045 [2024-12-14 16:49:20.089949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.045 [2024-12-14 16:49:20.089958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.045 [2024-12-14 16:49:20.090127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.045 [2024-12-14 16:49:20.090296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.045 [2024-12-14 16:49:20.090305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.045 [2024-12-14 16:49:20.090313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.045 [2024-12-14 16:49:20.090320] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.045 [2024-12-14 16:49:20.102402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.045 [2024-12-14 16:49:20.102763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.045 [2024-12-14 16:49:20.102781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.045 [2024-12-14 16:49:20.102789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.045 [2024-12-14 16:49:20.102958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.045 [2024-12-14 16:49:20.103127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.045 [2024-12-14 16:49:20.103136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.045 [2024-12-14 16:49:20.103143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.045 [2024-12-14 16:49:20.103150] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.045 [2024-12-14 16:49:20.115323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:50.045 [2024-12-14 16:49:20.115766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.045 [2024-12-14 16:49:20.115784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:50.045 [2024-12-14 16:49:20.115792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:50.045 [2024-12-14 16:49:20.115962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:50.045 [2024-12-14 16:49:20.116131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:50.045 [2024-12-14 16:49:20.116141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:50.045 [2024-12-14 16:49:20.116148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:50.045 [2024-12-14 16:49:20.116154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:50.305 [2024-12-14 16:49:20.128424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.128857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.128901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.128926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.129511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.130064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.130074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.130080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.130087] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.141450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.141800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.141818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.141829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.141998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.142168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.142177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.142183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.142190] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.154273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.154716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.154735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.154743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.154914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.155074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.155083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.155089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.155096] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.167262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.167679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.167724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.167748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.168331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.168929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.168956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.168976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.168996] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.180219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.180620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.180665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.180689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.181134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.181307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.181317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.181323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.181330] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.193104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.193462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.193479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.193487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.193661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.193830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.193840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.193846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.193853] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.206072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.206451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.206468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.206476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.206653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.206823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.206832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.206839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.206846] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.218937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.219206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.219222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.219229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.219389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.219549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.219565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.219575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.219597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.231911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.232341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.232386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.232409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.233008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.233411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.233420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.233428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.233435] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.244820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.245217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.245234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.245242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.245411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.245585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.245595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.245602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.245609] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.257765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.258068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.258086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.258093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.305 [2024-12-14 16:49:20.258262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.305 [2024-12-14 16:49:20.258430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.305 [2024-12-14 16:49:20.258439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.305 [2024-12-14 16:49:20.258445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.305 [2024-12-14 16:49:20.258452] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.305 [2024-12-14 16:49:20.270669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.305 [2024-12-14 16:49:20.271103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.305 [2024-12-14 16:49:20.271149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.305 [2024-12-14 16:49:20.271172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.271772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.272309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.272319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.272326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.272332] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.283640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.284053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.284070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.284077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.284254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.284424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.284433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.284440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.284447] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.296603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.297032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.297075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.297099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.297697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.298296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.298306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.298312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.298319] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.309553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.309887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.309904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.309916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.310085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.310255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.310265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.310272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.310278] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.322490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.322850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.322868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.322875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.323043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.323212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.323220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.323227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.323234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.335526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.335950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.335968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.335976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.336145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.336313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.336322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.336328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.336335] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.348553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.349003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.349049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.349073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.349590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.349763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.349772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.349779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.349785] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.361512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.361972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.362019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.362042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.362475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.362655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.362664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.362671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.362678] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.374542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.374830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.374848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.374856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.375024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.375192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.375202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.375209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.375215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.306 [2024-12-14 16:49:20.387652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.306 [2024-12-14 16:49:20.388090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.306 [2024-12-14 16:49:20.388135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.306 [2024-12-14 16:49:20.388159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.306 [2024-12-14 16:49:20.388602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.306 [2024-12-14 16:49:20.388777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.306 [2024-12-14 16:49:20.388785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.306 [2024-12-14 16:49:20.388795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.306 [2024-12-14 16:49:20.388802] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.566 [2024-12-14 16:49:20.400587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.566 [2024-12-14 16:49:20.400917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.566 [2024-12-14 16:49:20.400934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.566 [2024-12-14 16:49:20.400942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.566 [2024-12-14 16:49:20.401110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.566 [2024-12-14 16:49:20.401279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.566 [2024-12-14 16:49:20.401288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.566 [2024-12-14 16:49:20.401295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.566 [2024-12-14 16:49:20.401301] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.566 [2024-12-14 16:49:20.413582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.566 [2024-12-14 16:49:20.413963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.566 [2024-12-14 16:49:20.413982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.566 [2024-12-14 16:49:20.413990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.566 [2024-12-14 16:49:20.414161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.566 [2024-12-14 16:49:20.414321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.566 [2024-12-14 16:49:20.414331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.566 [2024-12-14 16:49:20.414338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.566 [2024-12-14 16:49:20.414344] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.566 [2024-12-14 16:49:20.426519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.566 [2024-12-14 16:49:20.426886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.566 [2024-12-14 16:49:20.426903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.566 [2024-12-14 16:49:20.426911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.566 [2024-12-14 16:49:20.427070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.566 [2024-12-14 16:49:20.427230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.566 [2024-12-14 16:49:20.427239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.566 [2024-12-14 16:49:20.427246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.566 [2024-12-14 16:49:20.427253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.566 [2024-12-14 16:49:20.439428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.566 [2024-12-14 16:49:20.439836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.566 [2024-12-14 16:49:20.439853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.566 [2024-12-14 16:49:20.439862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.566 [2024-12-14 16:49:20.440031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.566 [2024-12-14 16:49:20.440199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.566 [2024-12-14 16:49:20.440210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.566 [2024-12-14 16:49:20.440216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.566 [2024-12-14 16:49:20.440222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.566 [2024-12-14 16:49:20.452437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.566 [2024-12-14 16:49:20.452737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.566 [2024-12-14 16:49:20.452756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.566 [2024-12-14 16:49:20.452764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.566 [2024-12-14 16:49:20.452932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.566 [2024-12-14 16:49:20.453102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.566 [2024-12-14 16:49:20.453112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.566 [2024-12-14 16:49:20.453118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.566 [2024-12-14 16:49:20.453125] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.566 [2024-12-14 16:49:20.465401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.566 [2024-12-14 16:49:20.465810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.566 [2024-12-14 16:49:20.465828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.566 [2024-12-14 16:49:20.465836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.566 [2024-12-14 16:49:20.466018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.566 [2024-12-14 16:49:20.466188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.466197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.466204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.466211] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.478425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.478884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.478930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.478961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.479545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.479897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.479907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.479913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.479921] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.491518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.491934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.491982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.492011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.492516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.492697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.492707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.492715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.492721] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.504620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.504970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.505021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.505046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.505595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.505785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.505795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.505801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.505808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.517539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.517894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.517912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.517919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.518089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.518261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.518270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.518277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.518284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.530565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.530978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.530996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.531004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.531174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.531343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.531352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.531358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.531366] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.543583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.543936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.543954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.543962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.544131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.544298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.544307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.544314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.544320] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.556501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.556801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.556819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.556827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.556996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.557165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.557174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.557185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.557192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.569400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.569724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.569742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.569751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.569924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.570099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.570109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.570116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.570122] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.582334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.582725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.582744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.582752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.582921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.583090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.583099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.567 [2024-12-14 16:49:20.583105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.567 [2024-12-14 16:49:20.583113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.567 [2024-12-14 16:49:20.595348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.567 [2024-12-14 16:49:20.595702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.567 [2024-12-14 16:49:20.595721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.567 [2024-12-14 16:49:20.595729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.567 [2024-12-14 16:49:20.595903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.567 [2024-12-14 16:49:20.596077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.567 [2024-12-14 16:49:20.596086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.568 [2024-12-14 16:49:20.596093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.568 [2024-12-14 16:49:20.596101] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.568 [2024-12-14 16:49:20.608334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.568 [2024-12-14 16:49:20.608625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.568 [2024-12-14 16:49:20.608644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.568 [2024-12-14 16:49:20.608652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.568 [2024-12-14 16:49:20.608827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.568 [2024-12-14 16:49:20.609001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.568 [2024-12-14 16:49:20.609011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.568 [2024-12-14 16:49:20.609017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.568 [2024-12-14 16:49:20.609024] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.568 7838.25 IOPS, 30.62 MiB/s [2024-12-14T15:49:20.654Z] [2024-12-14 16:49:20.621402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.568 [2024-12-14 16:49:20.621709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.568 [2024-12-14 16:49:20.621727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.568 [2024-12-14 16:49:20.621736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.568 [2024-12-14 16:49:20.621909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.568 [2024-12-14 16:49:20.622083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.568 [2024-12-14 16:49:20.622093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.568 [2024-12-14 16:49:20.622100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.568 [2024-12-14 16:49:20.622106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.568 [2024-12-14 16:49:20.634487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.568 [2024-12-14 16:49:20.634850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.568 [2024-12-14 16:49:20.634868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.568 [2024-12-14 16:49:20.634876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.568 [2024-12-14 16:49:20.635049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.568 [2024-12-14 16:49:20.635222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.568 [2024-12-14 16:49:20.635232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.568 [2024-12-14 16:49:20.635239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.568 [2024-12-14 16:49:20.635245] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.568 [2024-12-14 16:49:20.647459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.568 [2024-12-14 16:49:20.647840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.568 [2024-12-14 16:49:20.647858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.568 [2024-12-14 16:49:20.647870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.568 [2024-12-14 16:49:20.648045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.568 [2024-12-14 16:49:20.648219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.568 [2024-12-14 16:49:20.648229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.568 [2024-12-14 16:49:20.648236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.568 [2024-12-14 16:49:20.648242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.828 [2024-12-14 16:49:20.660443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.828 [2024-12-14 16:49:20.660740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-12-14 16:49:20.660758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.828 [2024-12-14 16:49:20.660766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.828 [2024-12-14 16:49:20.660939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.828 [2024-12-14 16:49:20.661112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.828 [2024-12-14 16:49:20.661122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.828 [2024-12-14 16:49:20.661129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.828 [2024-12-14 16:49:20.661136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.828 [2024-12-14 16:49:20.673398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.828 [2024-12-14 16:49:20.673812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-12-14 16:49:20.673830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.828 [2024-12-14 16:49:20.673838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.828 [2024-12-14 16:49:20.674011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.828 [2024-12-14 16:49:20.674185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.828 [2024-12-14 16:49:20.674195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.828 [2024-12-14 16:49:20.674201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.828 [2024-12-14 16:49:20.674208] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.828 [2024-12-14 16:49:20.686450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.828 [2024-12-14 16:49:20.686807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-12-14 16:49:20.686824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.828 [2024-12-14 16:49:20.686832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.828 [2024-12-14 16:49:20.687001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.828 [2024-12-14 16:49:20.687173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.828 [2024-12-14 16:49:20.687183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.828 [2024-12-14 16:49:20.687189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.828 [2024-12-14 16:49:20.687197] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.828 [2024-12-14 16:49:20.699388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.828 [2024-12-14 16:49:20.699780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-12-14 16:49:20.699798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.828 [2024-12-14 16:49:20.699806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.828 [2024-12-14 16:49:20.699975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.828 [2024-12-14 16:49:20.700144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.828 [2024-12-14 16:49:20.700153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.828 [2024-12-14 16:49:20.700159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.828 [2024-12-14 16:49:20.700167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.828 [2024-12-14 16:49:20.712399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.828 [2024-12-14 16:49:20.712692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.828 [2024-12-14 16:49:20.712710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.828 [2024-12-14 16:49:20.712719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.828 [2024-12-14 16:49:20.712901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.828 [2024-12-14 16:49:20.713071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.828 [2024-12-14 16:49:20.713080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.828 [2024-12-14 16:49:20.713087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.828 [2024-12-14 16:49:20.713093] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.828 [2024-12-14 16:49:20.725439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.828 [2024-12-14 16:49:20.725790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.725807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.725815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.725983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.726151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.726161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.726171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.726178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.738366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.738793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.738811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.738818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.738987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.739155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.739164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.739171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.739178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.751364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.751752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.751769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.751777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.751945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.752114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.752123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.752130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.752137] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.764327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.764779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.764797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.764805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.764974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.765142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.765152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.765159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.765166] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.777357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.777776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.777795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.777803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.777977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.778152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.778161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.778168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.778175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.790396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.790756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.790774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.790781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.790950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.791119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.791128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.791134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.791141] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.803397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.803688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.803706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.803714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.803881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.804056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.804065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.804072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.804079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.816337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.816750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.816768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.816780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.816949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.817120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.817129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.817135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.817142] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.829335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.829715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.829734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.829742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.829911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.830080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.830089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.830096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.830103] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.842295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.842667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.842685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.829 [2024-12-14 16:49:20.842693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.829 [2024-12-14 16:49:20.842862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.829 [2024-12-14 16:49:20.843030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.829 [2024-12-14 16:49:20.843040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.829 [2024-12-14 16:49:20.843047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.829 [2024-12-14 16:49:20.843054] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.829 [2024-12-14 16:49:20.855234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.829 [2024-12-14 16:49:20.855683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.829 [2024-12-14 16:49:20.855701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.830 [2024-12-14 16:49:20.855709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.830 [2024-12-14 16:49:20.855885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.830 [2024-12-14 16:49:20.856048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.830 [2024-12-14 16:49:20.856058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.830 [2024-12-14 16:49:20.856064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.830 [2024-12-14 16:49:20.856070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.830 [2024-12-14 16:49:20.868228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.830 [2024-12-14 16:49:20.869334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-12-14 16:49:20.869358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.830 [2024-12-14 16:49:20.869366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.830 [2024-12-14 16:49:20.869534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.830 [2024-12-14 16:49:20.869727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.830 [2024-12-14 16:49:20.869736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.830 [2024-12-14 16:49:20.869743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.830 [2024-12-14 16:49:20.869750] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.830 [2024-12-14 16:49:20.881276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.830 [2024-12-14 16:49:20.881623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-12-14 16:49:20.881641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.830 [2024-12-14 16:49:20.881650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.830 [2024-12-14 16:49:20.881820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.830 [2024-12-14 16:49:20.881989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.830 [2024-12-14 16:49:20.881998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.830 [2024-12-14 16:49:20.882004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.830 [2024-12-14 16:49:20.882011] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.830 [2024-12-14 16:49:20.894198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.830 [2024-12-14 16:49:20.894624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-12-14 16:49:20.894642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.830 [2024-12-14 16:49:20.894651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.830 [2024-12-14 16:49:20.894820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.830 [2024-12-14 16:49:20.894990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.830 [2024-12-14 16:49:20.894999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.830 [2024-12-14 16:49:20.895009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.830 [2024-12-14 16:49:20.895017] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:50.830 [2024-12-14 16:49:20.907210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:50.830 [2024-12-14 16:49:20.907587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.830 [2024-12-14 16:49:20.907605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:50.830 [2024-12-14 16:49:20.907613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:50.830 [2024-12-14 16:49:20.907787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:50.830 [2024-12-14 16:49:20.907960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:50.830 [2024-12-14 16:49:20.907969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:50.830 [2024-12-14 16:49:20.907976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:50.830 [2024-12-14 16:49:20.907983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.090 [2024-12-14 16:49:20.920311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.090 [2024-12-14 16:49:20.920733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.090 [2024-12-14 16:49:20.920751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.090 [2024-12-14 16:49:20.920759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.090 [2024-12-14 16:49:20.920928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.090 [2024-12-14 16:49:20.921096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.090 [2024-12-14 16:49:20.921106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.090 [2024-12-14 16:49:20.921112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.090 [2024-12-14 16:49:20.921119] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.090 [2024-12-14 16:49:20.933341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.090 [2024-12-14 16:49:20.933763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.090 [2024-12-14 16:49:20.933782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.090 [2024-12-14 16:49:20.933790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.090 [2024-12-14 16:49:20.933958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.090 [2024-12-14 16:49:20.934127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.090 [2024-12-14 16:49:20.934137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.090 [2024-12-14 16:49:20.934144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.090 [2024-12-14 16:49:20.934151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.090 [2024-12-14 16:49:20.946320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.090 [2024-12-14 16:49:20.946750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.090 [2024-12-14 16:49:20.946767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.090 [2024-12-14 16:49:20.946775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.090 [2024-12-14 16:49:20.946944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.090 [2024-12-14 16:49:20.947112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.090 [2024-12-14 16:49:20.947121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.090 [2024-12-14 16:49:20.947128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.090 [2024-12-14 16:49:20.947134] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.090 [2024-12-14 16:49:20.959301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.090 [2024-12-14 16:49:20.959727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.090 [2024-12-14 16:49:20.959744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.090 [2024-12-14 16:49:20.959752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.090 [2024-12-14 16:49:20.959921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.090 [2024-12-14 16:49:20.960090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.090 [2024-12-14 16:49:20.960099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.090 [2024-12-14 16:49:20.960105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.090 [2024-12-14 16:49:20.960113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:20.972277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:20.972707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:20.972724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:20.972732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:20.972901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:20.973069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:20.973078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:20.973085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:20.973092] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:20.985251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:20.985700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:20.985719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:20.985730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:20.985900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:20.986068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:20.986078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:20.986084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:20.986090] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:20.998172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:20.998593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:20.998611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:20.998619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:20.998787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:20.998955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:20.998965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:20.998971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:20.998978] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:21.011142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:21.011565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:21.011583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:21.011590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:21.011759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:21.011928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:21.011938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:21.011945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:21.011951] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:21.024032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:21.024449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:21.024467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:21.024475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:21.024650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:21.024823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:21.024832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:21.024839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:21.024846] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:21.037001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:21.037396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:21.037414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:21.037422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:21.037596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:21.037765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:21.037775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:21.037781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:21.037788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:21.049945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:21.050340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:21.050357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:21.050365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:21.050533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:21.050706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:21.050716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:21.050722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:21.050729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:21.062907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:21.063282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:21.063299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:21.063307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:21.063476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:21.063667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:21.063677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:21.063688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:21.063696] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:21.075874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:21.076271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:21.076289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:21.076297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:21.076466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:21.076643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:21.076653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:21.076659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:21.076667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:21.088819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:21.089155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.091 [2024-12-14 16:49:21.089173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.091 [2024-12-14 16:49:21.089181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.091 [2024-12-14 16:49:21.089349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.091 [2024-12-14 16:49:21.089517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.091 [2024-12-14 16:49:21.089527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.091 [2024-12-14 16:49:21.089534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.091 [2024-12-14 16:49:21.089540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.091 [2024-12-14 16:49:21.101794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.091 [2024-12-14 16:49:21.102140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.092 [2024-12-14 16:49:21.102158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.092 [2024-12-14 16:49:21.102166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.092 [2024-12-14 16:49:21.102334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.092 [2024-12-14 16:49:21.102503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.092 [2024-12-14 16:49:21.102513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.092 [2024-12-14 16:49:21.102519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.092 [2024-12-14 16:49:21.102525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.092 [2024-12-14 16:49:21.114782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.092 [2024-12-14 16:49:21.115226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.092 [2024-12-14 16:49:21.115244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.092 [2024-12-14 16:49:21.115252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.092 [2024-12-14 16:49:21.115420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.092 [2024-12-14 16:49:21.115593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.092 [2024-12-14 16:49:21.115603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.092 [2024-12-14 16:49:21.115610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.092 [2024-12-14 16:49:21.115617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.092 [2024-12-14 16:49:21.127783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.092 [2024-12-14 16:49:21.128248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.092 [2024-12-14 16:49:21.128265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.092 [2024-12-14 16:49:21.128273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.092 [2024-12-14 16:49:21.128442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.092 [2024-12-14 16:49:21.128616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.092 [2024-12-14 16:49:21.128626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.092 [2024-12-14 16:49:21.128633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.092 [2024-12-14 16:49:21.128640] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.092 [2024-12-14 16:49:21.140832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.092 [2024-12-14 16:49:21.141236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.092 [2024-12-14 16:49:21.141252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.092 [2024-12-14 16:49:21.141260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.092 [2024-12-14 16:49:21.141428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.092 [2024-12-14 16:49:21.141602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.092 [2024-12-14 16:49:21.141612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.092 [2024-12-14 16:49:21.141619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.092 [2024-12-14 16:49:21.141626] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.092 [2024-12-14 16:49:21.153789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.092 [2024-12-14 16:49:21.154234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.092 [2024-12-14 16:49:21.154252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.092 [2024-12-14 16:49:21.154263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.092 [2024-12-14 16:49:21.154432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.092 [2024-12-14 16:49:21.154606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.092 [2024-12-14 16:49:21.154616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.092 [2024-12-14 16:49:21.154622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.092 [2024-12-14 16:49:21.154630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.092 [2024-12-14 16:49:21.166786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.092 [2024-12-14 16:49:21.167184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.092 [2024-12-14 16:49:21.167202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.092 [2024-12-14 16:49:21.167210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.092 [2024-12-14 16:49:21.167379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.092 [2024-12-14 16:49:21.167547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.092 [2024-12-14 16:49:21.167562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.092 [2024-12-14 16:49:21.167569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.092 [2024-12-14 16:49:21.167578] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.352 [2024-12-14 16:49:21.179859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.352 [2024-12-14 16:49:21.180287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-14 16:49:21.180305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.352 [2024-12-14 16:49:21.180313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.352 [2024-12-14 16:49:21.180486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.352 [2024-12-14 16:49:21.180666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.352 [2024-12-14 16:49:21.180676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.352 [2024-12-14 16:49:21.180683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.352 [2024-12-14 16:49:21.180690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.352 [2024-12-14 16:49:21.192756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.352 [2024-12-14 16:49:21.193176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-14 16:49:21.193193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.352 [2024-12-14 16:49:21.193201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.352 [2024-12-14 16:49:21.193369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.352 [2024-12-14 16:49:21.193541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.352 [2024-12-14 16:49:21.193551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.352 [2024-12-14 16:49:21.193564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.352 [2024-12-14 16:49:21.193571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.352 [2024-12-14 16:49:21.205637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.352 [2024-12-14 16:49:21.206052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-14 16:49:21.206069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.352 [2024-12-14 16:49:21.206077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.352 [2024-12-14 16:49:21.206246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.352 [2024-12-14 16:49:21.206414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.352 [2024-12-14 16:49:21.206423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.206430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.206437] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.218621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.219037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.219055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.219062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.219230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.219399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.219408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.219414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.219421] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.231585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.231934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.231952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.231959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.232127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.232296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.232305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.232315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.232323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.244487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.244913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.244930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.244938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.245106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.245275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.245284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.245291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.245297] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.257456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.257879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.257897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.257904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.258073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.258241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.258251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.258257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.258264] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.270341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.270762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.270779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.270787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.270955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.271124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.271133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.271140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.271147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.283311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.283708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.283726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.283744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.283904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.284064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.284073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.284079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.284086] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.296226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.296645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.296663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.296672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.296840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.297008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.297018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.297025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.297032] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.309118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.309544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.309566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.309575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.309743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.309912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.309922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.309928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.309935] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.322145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.322540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.322562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.322575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.322744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.322914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.322923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.322929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.353 [2024-12-14 16:49:21.322936] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.353 [2024-12-14 16:49:21.335102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.353 [2024-12-14 16:49:21.335528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-14 16:49:21.335544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.353 [2024-12-14 16:49:21.335552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.353 [2024-12-14 16:49:21.335726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.353 [2024-12-14 16:49:21.335894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.353 [2024-12-14 16:49:21.335904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.353 [2024-12-14 16:49:21.335910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.354 [2024-12-14 16:49:21.335917] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.354 [2024-12-14 16:49:21.348274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.354 [2024-12-14 16:49:21.348624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.354 [2024-12-14 16:49:21.348643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.354 [2024-12-14 16:49:21.348650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.354 [2024-12-14 16:49:21.348820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.354 [2024-12-14 16:49:21.348988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.354 [2024-12-14 16:49:21.348998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.354 [2024-12-14 16:49:21.349004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.354 [2024-12-14 16:49:21.349012] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.354 [2024-12-14 16:49:21.361308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.354 [2024-12-14 16:49:21.361666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.354 [2024-12-14 16:49:21.361686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.354 [2024-12-14 16:49:21.361695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.354 [2024-12-14 16:49:21.361864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.354 [2024-12-14 16:49:21.362036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.354 [2024-12-14 16:49:21.362046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.354 [2024-12-14 16:49:21.362053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.354 [2024-12-14 16:49:21.362059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.354 [2024-12-14 16:49:21.374257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.354 [2024-12-14 16:49:21.374684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.354 [2024-12-14 16:49:21.374703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.354 [2024-12-14 16:49:21.374711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.354 [2024-12-14 16:49:21.374879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.354 [2024-12-14 16:49:21.375047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.354 [2024-12-14 16:49:21.375056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.354 [2024-12-14 16:49:21.375063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.354 [2024-12-14 16:49:21.375070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.354 [2024-12-14 16:49:21.387219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.354 [2024-12-14 16:49:21.387663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.354 [2024-12-14 16:49:21.387681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.354 [2024-12-14 16:49:21.387689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.354 [2024-12-14 16:49:21.387858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.354 [2024-12-14 16:49:21.388027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.354 [2024-12-14 16:49:21.388036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.354 [2024-12-14 16:49:21.388043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.354 [2024-12-14 16:49:21.388050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.354 [2024-12-14 16:49:21.400298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.354 [2024-12-14 16:49:21.400699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.354 [2024-12-14 16:49:21.400717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.354 [2024-12-14 16:49:21.400725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.354 [2024-12-14 16:49:21.400894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.354 [2024-12-14 16:49:21.401062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.354 [2024-12-14 16:49:21.401072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.354 [2024-12-14 16:49:21.401081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.354 [2024-12-14 16:49:21.401089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.354 [2024-12-14 16:49:21.413263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.354 [2024-12-14 16:49:21.413804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.354 [2024-12-14 16:49:21.413824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.354 [2024-12-14 16:49:21.413833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.354 [2024-12-14 16:49:21.414003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.354 [2024-12-14 16:49:21.414173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.354 [2024-12-14 16:49:21.414182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.354 [2024-12-14 16:49:21.414189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.354 [2024-12-14 16:49:21.414196] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.354 [2024-12-14 16:49:21.426361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.354 [2024-12-14 16:49:21.426719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.354 [2024-12-14 16:49:21.426737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.354 [2024-12-14 16:49:21.426746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.354 [2024-12-14 16:49:21.426914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.354 [2024-12-14 16:49:21.427083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.354 [2024-12-14 16:49:21.427092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.354 [2024-12-14 16:49:21.427099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.354 [2024-12-14 16:49:21.427106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.614 [2024-12-14 16:49:21.439334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.614 [2024-12-14 16:49:21.439757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.614 [2024-12-14 16:49:21.439776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.614 [2024-12-14 16:49:21.439783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.614 [2024-12-14 16:49:21.439953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.614 [2024-12-14 16:49:21.440122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.614 [2024-12-14 16:49:21.440131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.614 [2024-12-14 16:49:21.440138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.614 [2024-12-14 16:49:21.440145] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.614 [2024-12-14 16:49:21.452290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.614 [2024-12-14 16:49:21.452714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.614 [2024-12-14 16:49:21.452733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.452741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.452910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.453078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.453087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.453094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.453101] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.465270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.465688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.465706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.465713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.465883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.466051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.466060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.466066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.466073] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.478244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.478642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.478660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.478668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.478841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.479001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.479010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.479017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.479023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.491173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.491590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.491608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.491621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.491791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.491960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.491969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.491976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.491982] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.504139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.504551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.504572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.504581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.504749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.504919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.504928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.504935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.504941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.517020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.517464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.517482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.517490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.517663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.517833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.517842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.517849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.517856] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.530009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.530404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.530422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.530430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.530606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.530779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.530789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.530795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.530802] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.542964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.543367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.543385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.543393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.543568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.543737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.543747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.543754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.543761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.555918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.556318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.556335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.556343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.556511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.556686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.556696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.556703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.556710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.568861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.569277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.569294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.615 [2024-12-14 16:49:21.569302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.615 [2024-12-14 16:49:21.569470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.615 [2024-12-14 16:49:21.569646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.615 [2024-12-14 16:49:21.569655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.615 [2024-12-14 16:49:21.569666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.615 [2024-12-14 16:49:21.569674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.615 [2024-12-14 16:49:21.581824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.615 [2024-12-14 16:49:21.582222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.615 [2024-12-14 16:49:21.582239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.616 [2024-12-14 16:49:21.582247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.616 [2024-12-14 16:49:21.582416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.616 [2024-12-14 16:49:21.582589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.616 [2024-12-14 16:49:21.582599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.616 [2024-12-14 16:49:21.582605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.616 [2024-12-14 16:49:21.582612] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.616 [2024-12-14 16:49:21.594769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.616 [2024-12-14 16:49:21.595170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.616 [2024-12-14 16:49:21.595188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.616 [2024-12-14 16:49:21.595196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.616 [2024-12-14 16:49:21.595365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.616 [2024-12-14 16:49:21.595533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.616 [2024-12-14 16:49:21.595543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.616 [2024-12-14 16:49:21.595549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.616 [2024-12-14 16:49:21.595562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.616 [2024-12-14 16:49:21.607773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.616 [2024-12-14 16:49:21.608176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.616 [2024-12-14 16:49:21.608193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.616 [2024-12-14 16:49:21.608201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.616 [2024-12-14 16:49:21.608371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.616 [2024-12-14 16:49:21.608539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.616 [2024-12-14 16:49:21.608549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.616 [2024-12-14 16:49:21.608561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.616 [2024-12-14 16:49:21.608568] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.616 6270.60 IOPS, 24.49 MiB/s [2024-12-14T15:49:21.702Z] [2024-12-14 16:49:21.620754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.616 [2024-12-14 16:49:21.621164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.616 [2024-12-14 16:49:21.621182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.616 [2024-12-14 16:49:21.621190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.616 [2024-12-14 16:49:21.621358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.616 [2024-12-14 16:49:21.621526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.616 [2024-12-14 16:49:21.621535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.616 [2024-12-14 16:49:21.621542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.616 [2024-12-14 16:49:21.621548] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.616 [2024-12-14 16:49:21.633709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.616 [2024-12-14 16:49:21.634068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.616 [2024-12-14 16:49:21.634085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.616 [2024-12-14 16:49:21.634092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.616 [2024-12-14 16:49:21.634261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.616 [2024-12-14 16:49:21.634429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.616 [2024-12-14 16:49:21.634439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.616 [2024-12-14 16:49:21.634445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.616 [2024-12-14 16:49:21.634453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.616 [2024-12-14 16:49:21.646729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:51.616 [2024-12-14 16:49:21.647054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.616 [2024-12-14 16:49:21.647072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:51.616 [2024-12-14 16:49:21.647080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:51.616 [2024-12-14 16:49:21.647249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:51.616 [2024-12-14 16:49:21.647417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:51.616 [2024-12-14 16:49:21.647427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:51.616 [2024-12-14 16:49:21.647434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:51.616 [2024-12-14 16:49:21.647441] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:51.616 [2024-12-14 16:49:21.659784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.616 [2024-12-14 16:49:21.660141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.616 [2024-12-14 16:49:21.660158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.616 [2024-12-14 16:49:21.660170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.616 [2024-12-14 16:49:21.660337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.616 [2024-12-14 16:49:21.660506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.616 [2024-12-14 16:49:21.660515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.616 [2024-12-14 16:49:21.660522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.616 [2024-12-14 16:49:21.660528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.616 [2024-12-14 16:49:21.672785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.616 [2024-12-14 16:49:21.673179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.616 [2024-12-14 16:49:21.673197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.616 [2024-12-14 16:49:21.673204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.616 [2024-12-14 16:49:21.673372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.616 [2024-12-14 16:49:21.673541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.616 [2024-12-14 16:49:21.673550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.616 [2024-12-14 16:49:21.673562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.616 [2024-12-14 16:49:21.673569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.616 [2024-12-14 16:49:21.685728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.616 [2024-12-14 16:49:21.686128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.616 [2024-12-14 16:49:21.686146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.616 [2024-12-14 16:49:21.686154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.616 [2024-12-14 16:49:21.686323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.616 [2024-12-14 16:49:21.686492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.616 [2024-12-14 16:49:21.686501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.616 [2024-12-14 16:49:21.686508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.616 [2024-12-14 16:49:21.686514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.698862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.699266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.699284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.699292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.699465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.877 [2024-12-14 16:49:21.699647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.877 [2024-12-14 16:49:21.699658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.877 [2024-12-14 16:49:21.699664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.877 [2024-12-14 16:49:21.699671] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.711827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.712248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.712266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.712274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.712442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.877 [2024-12-14 16:49:21.712617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.877 [2024-12-14 16:49:21.712627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.877 [2024-12-14 16:49:21.712633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.877 [2024-12-14 16:49:21.712641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.724848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.725272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.725290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.725298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.725467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.877 [2024-12-14 16:49:21.725641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.877 [2024-12-14 16:49:21.725651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.877 [2024-12-14 16:49:21.725658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.877 [2024-12-14 16:49:21.725666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.737857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.738206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.738223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.738231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.738399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.877 [2024-12-14 16:49:21.738572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.877 [2024-12-14 16:49:21.738582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.877 [2024-12-14 16:49:21.738593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.877 [2024-12-14 16:49:21.738601] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.750751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.751170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.751188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.751195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.751363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.877 [2024-12-14 16:49:21.751532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.877 [2024-12-14 16:49:21.751541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.877 [2024-12-14 16:49:21.751548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.877 [2024-12-14 16:49:21.751561] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.763730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.764148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.764165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.764173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.764342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.877 [2024-12-14 16:49:21.764510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.877 [2024-12-14 16:49:21.764519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.877 [2024-12-14 16:49:21.764526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.877 [2024-12-14 16:49:21.764533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.776677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.777081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.777099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.777107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.777276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.877 [2024-12-14 16:49:21.777444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.877 [2024-12-14 16:49:21.777453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.877 [2024-12-14 16:49:21.777460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.877 [2024-12-14 16:49:21.777466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.789701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.790125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.790142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.790150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.790323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.877 [2024-12-14 16:49:21.790496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.877 [2024-12-14 16:49:21.790505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.877 [2024-12-14 16:49:21.790512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.877 [2024-12-14 16:49:21.790519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.877 [2024-12-14 16:49:21.802633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.877 [2024-12-14 16:49:21.803052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.877 [2024-12-14 16:49:21.803070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.877 [2024-12-14 16:49:21.803078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.877 [2024-12-14 16:49:21.803247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.803415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.803425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.803431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.803438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.815607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.816026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.816044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.816051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.816219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.816387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.816397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.816403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.816410] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.828606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.829035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.829053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.829064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.829233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.829401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.829411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.829417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.829425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.841496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.841914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.841932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.841940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.842108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.842277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.842286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.842292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.842299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.854454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.854883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.854901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.854908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.855076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.855244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.855253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.855260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.855266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.867486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.867908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.867926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.867934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.868102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.868274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.868284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.868290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.868297] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.880462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.880794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.880812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.880821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.880991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.881159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.881168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.881174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.881181] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.893357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.893699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.893717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.893725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.893893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.894062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.894072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.894078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.894084] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.906365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.906740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.906759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.906767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.906941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.907115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.907127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.907138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.907147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.919427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.919791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.919809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.919817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.919991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.878 [2024-12-14 16:49:21.920165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.878 [2024-12-14 16:49:21.920174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.878 [2024-12-14 16:49:21.920181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.878 [2024-12-14 16:49:21.920188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.878 [2024-12-14 16:49:21.932456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.878 [2024-12-14 16:49:21.932797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.878 [2024-12-14 16:49:21.932814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.878 [2024-12-14 16:49:21.932822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.878 [2024-12-14 16:49:21.932990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.879 [2024-12-14 16:49:21.933158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.879 [2024-12-14 16:49:21.933168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.879 [2024-12-14 16:49:21.933174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.879 [2024-12-14 16:49:21.933181] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.879 [2024-12-14 16:49:21.945352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.879 [2024-12-14 16:49:21.945707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.879 [2024-12-14 16:49:21.945725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.879 [2024-12-14 16:49:21.945732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.879 [2024-12-14 16:49:21.945901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.879 [2024-12-14 16:49:21.946070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.879 [2024-12-14 16:49:21.946079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.879 [2024-12-14 16:49:21.946086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.879 [2024-12-14 16:49:21.946092] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:51.879 [2024-12-14 16:49:21.958401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:51.879 [2024-12-14 16:49:21.958810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.879 [2024-12-14 16:49:21.958828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:51.879 [2024-12-14 16:49:21.958835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:51.879 [2024-12-14 16:49:21.959009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:51.879 [2024-12-14 16:49:21.959182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:51.879 [2024-12-14 16:49:21.959192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:51.879 [2024-12-14 16:49:21.959198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:51.879 [2024-12-14 16:49:21.959205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.139 [2024-12-14 16:49:21.971318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.139 [2024-12-14 16:49:21.971715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.139 [2024-12-14 16:49:21.971733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.139 [2024-12-14 16:49:21.971741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.139 [2024-12-14 16:49:21.971910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.139 [2024-12-14 16:49:21.972079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.139 [2024-12-14 16:49:21.972088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.139 [2024-12-14 16:49:21.972095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.139 [2024-12-14 16:49:21.972102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.139 [2024-12-14 16:49:21.984326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.139 [2024-12-14 16:49:21.984647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.139 [2024-12-14 16:49:21.984666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.139 [2024-12-14 16:49:21.984674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.139 [2024-12-14 16:49:21.984841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.139 [2024-12-14 16:49:21.985010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.139 [2024-12-14 16:49:21.985019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.139 [2024-12-14 16:49:21.985026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.139 [2024-12-14 16:49:21.985032] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.139 [2024-12-14 16:49:21.997308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.139 [2024-12-14 16:49:21.997713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.139 [2024-12-14 16:49:21.997731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.139 [2024-12-14 16:49:21.997743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.139 [2024-12-14 16:49:21.997912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.139 [2024-12-14 16:49:21.998082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.139 [2024-12-14 16:49:21.998092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.139 [2024-12-14 16:49:21.998098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.139 [2024-12-14 16:49:21.998105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.139 [2024-12-14 16:49:22.010284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.139 [2024-12-14 16:49:22.010724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.139 [2024-12-14 16:49:22.010742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.139 [2024-12-14 16:49:22.010750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.139 [2024-12-14 16:49:22.010918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.139 [2024-12-14 16:49:22.011086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.139 [2024-12-14 16:49:22.011096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.139 [2024-12-14 16:49:22.011103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.139 [2024-12-14 16:49:22.011110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.139 [2024-12-14 16:49:22.023195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.139 [2024-12-14 16:49:22.023616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.023635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.023643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.023811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.023980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.023989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.023996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.024003] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.036216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.036689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.036708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.036716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.036884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.037056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.037066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.037072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.037079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.049147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.049504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.049523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.049531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.049707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.049876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.049885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.049892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.049899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.062097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.062452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.062470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.062478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.062652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.062822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.062832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.062838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.062846] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.074971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.075366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.075383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.075391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.075550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.075716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.075726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.075736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.075742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.087901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.088293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.088310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.088318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.088477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.088645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.088655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.088661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.088669] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.100869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.101286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.101304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.101311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.101470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.101637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.101647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.101653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.101661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.113645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.114044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.114062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.114070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.114239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.114407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.114416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.114423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.114430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.126488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.126910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.126956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.126980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.127569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.127732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.127741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.127747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.127753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.139408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.139743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.139760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.139767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.140 [2024-12-14 16:49:22.139927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.140 [2024-12-14 16:49:22.140086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.140 [2024-12-14 16:49:22.140095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.140 [2024-12-14 16:49:22.140101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.140 [2024-12-14 16:49:22.140108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.140 [2024-12-14 16:49:22.152274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.140 [2024-12-14 16:49:22.152636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.140 [2024-12-14 16:49:22.152682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.140 [2024-12-14 16:49:22.152706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.141 [2024-12-14 16:49:22.153289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.141 [2024-12-14 16:49:22.153645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.141 [2024-12-14 16:49:22.153656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.141 [2024-12-14 16:49:22.153662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.141 [2024-12-14 16:49:22.153670] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.141 [2024-12-14 16:49:22.165248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.141 [2024-12-14 16:49:22.165524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-14 16:49:22.165585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.141 [2024-12-14 16:49:22.165617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.141 [2024-12-14 16:49:22.166078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.141 [2024-12-14 16:49:22.166247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.141 [2024-12-14 16:49:22.166257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.141 [2024-12-14 16:49:22.166263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.141 [2024-12-14 16:49:22.166271] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.141 [2024-12-14 16:49:22.178325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.141 [2024-12-14 16:49:22.178612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-14 16:49:22.178630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.141 [2024-12-14 16:49:22.178638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.141 [2024-12-14 16:49:22.178805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.141 [2024-12-14 16:49:22.178974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.141 [2024-12-14 16:49:22.178983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.141 [2024-12-14 16:49:22.178990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.141 [2024-12-14 16:49:22.178998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.141 [2024-12-14 16:49:22.191349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.141 [2024-12-14 16:49:22.191726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-14 16:49:22.191745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.141 [2024-12-14 16:49:22.191752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.141 [2024-12-14 16:49:22.191920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.141 [2024-12-14 16:49:22.192089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.141 [2024-12-14 16:49:22.192099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.141 [2024-12-14 16:49:22.192105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.141 [2024-12-14 16:49:22.192112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.141 [2024-12-14 16:49:22.204351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.141 [2024-12-14 16:49:22.204689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-14 16:49:22.204706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.141 [2024-12-14 16:49:22.204713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.141 [2024-12-14 16:49:22.204872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.141 [2024-12-14 16:49:22.205035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.141 [2024-12-14 16:49:22.205044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.141 [2024-12-14 16:49:22.205050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.141 [2024-12-14 16:49:22.205057] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.141 [2024-12-14 16:49:22.217287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.141 [2024-12-14 16:49:22.217654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.141 [2024-12-14 16:49:22.217672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.141 [2024-12-14 16:49:22.217680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.141 [2024-12-14 16:49:22.217840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.141 [2024-12-14 16:49:22.217999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.141 [2024-12-14 16:49:22.218008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.141 [2024-12-14 16:49:22.218014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.141 [2024-12-14 16:49:22.218021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.401 [2024-12-14 16:49:22.230388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.401 [2024-12-14 16:49:22.230736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.401 [2024-12-14 16:49:22.230755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.401 [2024-12-14 16:49:22.230763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.401 [2024-12-14 16:49:22.230931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.401 [2024-12-14 16:49:22.231100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.401 [2024-12-14 16:49:22.231109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.401 [2024-12-14 16:49:22.231115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.401 [2024-12-14 16:49:22.231122] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.401 [2024-12-14 16:49:22.243199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.401 [2024-12-14 16:49:22.243595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.401 [2024-12-14 16:49:22.243613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.401 [2024-12-14 16:49:22.243620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.401 [2024-12-14 16:49:22.243790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.401 [2024-12-14 16:49:22.243958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.402 [2024-12-14 16:49:22.243967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.402 [2024-12-14 16:49:22.243978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.402 [2024-12-14 16:49:22.243985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.402 [2024-12-14 16:49:22.256120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.402 [2024-12-14 16:49:22.256506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.402 [2024-12-14 16:49:22.256523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.402 [2024-12-14 16:49:22.256531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.402 [2024-12-14 16:49:22.256719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.402 [2024-12-14 16:49:22.256888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.402 [2024-12-14 16:49:22.256898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.402 [2024-12-14 16:49:22.256905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.402 [2024-12-14 16:49:22.256912] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.402 [2024-12-14 16:49:22.268968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.402 [2024-12-14 16:49:22.269306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.402 [2024-12-14 16:49:22.269323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.402 [2024-12-14 16:49:22.269330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.402 [2024-12-14 16:49:22.269491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.402 [2024-12-14 16:49:22.269678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.402 [2024-12-14 16:49:22.269688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.402 [2024-12-14 16:49:22.269694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.402 [2024-12-14 16:49:22.269701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.402 [2024-12-14 16:49:22.281994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.402 [2024-12-14 16:49:22.282393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.402 [2024-12-14 16:49:22.282438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.402 [2024-12-14 16:49:22.282462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.402 [2024-12-14 16:49:22.282875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.402 [2024-12-14 16:49:22.283037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.402 [2024-12-14 16:49:22.283046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.402 [2024-12-14 16:49:22.283052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.402 [2024-12-14 16:49:22.283059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.402 [2024-12-14 16:49:22.294911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.402 [2024-12-14 16:49:22.295261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.402 [2024-12-14 16:49:22.295305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.402 [2024-12-14 16:49:22.295329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.402 [2024-12-14 16:49:22.295883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.402 [2024-12-14 16:49:22.296045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.402 [2024-12-14 16:49:22.296054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.402 [2024-12-14 16:49:22.296060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.402 [2024-12-14 16:49:22.296067] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.402 [2024-12-14 16:49:22.307743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.402 [2024-12-14 16:49:22.308032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.402 [2024-12-14 16:49:22.308049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.402 [2024-12-14 16:49:22.308056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.402 [2024-12-14 16:49:22.308216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.402 [2024-12-14 16:49:22.308376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.402 [2024-12-14 16:49:22.308385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.402 [2024-12-14 16:49:22.308391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.402 [2024-12-14 16:49:22.308398] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1202835 Killed "${NVMF_APP[@]}" "$@"
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:52.402 [2024-12-14 16:49:22.320783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.402 [2024-12-14 16:49:22.321145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.402 [2024-12-14 16:49:22.321163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.402 [2024-12-14 16:49:22.321171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.402 [2024-12-14 16:49:22.321344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.402 [2024-12-14 16:49:22.321518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.402 [2024-12-14 16:49:22.321528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.402 [2024-12-14 16:49:22.321539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.402 [2024-12-14 16:49:22.321547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1204191
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1204191
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1204191 ']'
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:52.402 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:52.402 [2024-12-14 16:49:22.333899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.402 [2024-12-14 16:49:22.334312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.402 [2024-12-14 16:49:22.334328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.402 [2024-12-14 16:49:22.334335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.402 [2024-12-14 16:49:22.334508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.402 [2024-12-14 16:49:22.334689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.402 [2024-12-14 16:49:22.334698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.402 [2024-12-14 16:49:22.334705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.402 [2024-12-14 16:49:22.334713] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.402 [2024-12-14 16:49:22.346919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.402 [2024-12-14 16:49:22.347289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.402 [2024-12-14 16:49:22.347306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.402 [2024-12-14 16:49:22.347313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.402 [2024-12-14 16:49:22.347486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.402 [2024-12-14 16:49:22.347665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.402 [2024-12-14 16:49:22.347674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.402 [2024-12-14 16:49:22.347681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.402 [2024-12-14 16:49:22.347687] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.402 [2024-12-14 16:49:22.359901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.402 [2024-12-14 16:49:22.360306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.402 [2024-12-14 16:49:22.360323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.360330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.360503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.360683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.360692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.360699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.360706] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.371717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:35:52.403 [2024-12-14 16:49:22.371755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:52.403 [2024-12-14 16:49:22.373081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.373479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.373496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.373503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.373676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.373845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.373853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.373860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.373868] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.386032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.386375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.386392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.386400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.386573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.386764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.386772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.386779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.386786] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.398964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.399300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.399316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.399324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.399492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.399684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.399693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.399699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.399705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.411946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.412363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.412381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.412389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.412567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.412741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.412749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.412755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.412762] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.424915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.425329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.425346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.425354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.425523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.425715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.425724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.425731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.425737] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.437933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.438340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.438357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.438367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.438535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.438733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.438743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.438748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.438755] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.450837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.451162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.451178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.451186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.451355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.451523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.451532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.451538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.451545] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.451859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:52.403 [2024-12-14 16:49:22.463728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.464144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.464162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.464170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.403 [2024-12-14 16:49:22.464339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.403 [2024-12-14 16:49:22.464508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.403 [2024-12-14 16:49:22.464517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.403 [2024-12-14 16:49:22.464523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.403 [2024-12-14 16:49:22.464530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.403 [2024-12-14 16:49:22.473181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:52.403 [2024-12-14 16:49:22.473210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:52.403 [2024-12-14 16:49:22.473217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:52.403 [2024-12-14 16:49:22.473223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:52.403 [2024-12-14 16:49:22.473229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:52.403 [2024-12-14 16:49:22.474380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:35:52.403 [2024-12-14 16:49:22.474491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:52.403 [2024-12-14 16:49:22.474492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:35:52.403 [2024-12-14 16:49:22.476829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.403 [2024-12-14 16:49:22.477287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.403 [2024-12-14 16:49:22.477307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.403 [2024-12-14 16:49:22.477316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.404 [2024-12-14 16:49:22.477491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.404 [2024-12-14 16:49:22.477672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.404 [2024-12-14 16:49:22.477681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.404 [2024-12-14 16:49:22.477688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.404 [2024-12-14 16:49:22.477696] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.663 [2024-12-14 16:49:22.489899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.663 [2024-12-14 16:49:22.490307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.663 [2024-12-14 16:49:22.490327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.663 [2024-12-14 16:49:22.490336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.663 [2024-12-14 16:49:22.490509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.663 [2024-12-14 16:49:22.490692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.663 [2024-12-14 16:49:22.490701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.663 [2024-12-14 16:49:22.490708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.663 [2024-12-14 16:49:22.490717] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.663 [2024-12-14 16:49:22.502929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.663 [2024-12-14 16:49:22.503366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.663 [2024-12-14 16:49:22.503387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.663 [2024-12-14 16:49:22.503396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.663 [2024-12-14 16:49:22.503578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.663 [2024-12-14 16:49:22.503753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.663 [2024-12-14 16:49:22.503762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.663 [2024-12-14 16:49:22.503769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.663 [2024-12-14 16:49:22.503777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.663 [2024-12-14 16:49:22.515979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.663 [2024-12-14 16:49:22.516422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.663 [2024-12-14 16:49:22.516441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.663 [2024-12-14 16:49:22.516450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.663 [2024-12-14 16:49:22.516630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.663 [2024-12-14 16:49:22.516804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.663 [2024-12-14 16:49:22.516812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.663 [2024-12-14 16:49:22.516819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.663 [2024-12-14 16:49:22.516827] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.663 [2024-12-14 16:49:22.529031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.663 [2024-12-14 16:49:22.529461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.663 [2024-12-14 16:49:22.529483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.663 [2024-12-14 16:49:22.529492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.663 [2024-12-14 16:49:22.529672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.663 [2024-12-14 16:49:22.529847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.663 [2024-12-14 16:49:22.529855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.663 [2024-12-14 16:49:22.529862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.663 [2024-12-14 16:49:22.529870] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.663 [2024-12-14 16:49:22.542060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:52.663 [2024-12-14 16:49:22.542472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.663 [2024-12-14 16:49:22.542490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420
00:35:52.663 [2024-12-14 16:49:22.542498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set
00:35:52.663 [2024-12-14 16:49:22.542676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor
00:35:52.663 [2024-12-14 16:49:22.542852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:52.663 [2024-12-14 16:49:22.542860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:52.663 [2024-12-14 16:49:22.542867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:52.663 [2024-12-14 16:49:22.542874] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:52.663 [2024-12-14 16:49:22.555071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.663 [2024-12-14 16:49:22.555482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.555499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.555512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.555690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.555865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.555873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.555880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.664 [2024-12-14 16:49:22.555887] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.664 [2024-12-14 16:49:22.568077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.664 [2024-12-14 16:49:22.568503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.568519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.568527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.568704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.568878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.568887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.568894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.664 [2024-12-14 16:49:22.568900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.664 [2024-12-14 16:49:22.581097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.664 [2024-12-14 16:49:22.581523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.581540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.581548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.581725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.581900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.581909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.581916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.664 [2024-12-14 16:49:22.581923] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.664 [2024-12-14 16:49:22.594116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.664 [2024-12-14 16:49:22.594537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.594562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.594570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.594744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.594918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.594926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.594933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.664 [2024-12-14 16:49:22.594939] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.664 [2024-12-14 16:49:22.607160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.664 [2024-12-14 16:49:22.607591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.607609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.607616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.607789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.607963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.607972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.607978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.664 [2024-12-14 16:49:22.607984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.664 5225.50 IOPS, 20.41 MiB/s [2024-12-14T15:49:22.750Z] [2024-12-14 16:49:22.617616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:52.664 [2024-12-14 16:49:22.620190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.664 [2024-12-14 16:49:22.620530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.620547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.620554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.620733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.620907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.620915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.620922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:52.664 [2024-12-14 16:49:22.620928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.664 [2024-12-14 16:49:22.633297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.664 [2024-12-14 16:49:22.633632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.633650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.633657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.633831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.634004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.634012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.634018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.664 [2024-12-14 16:49:22.634025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.664 [2024-12-14 16:49:22.646390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.664 [2024-12-14 16:49:22.646827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.646844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.646851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.647025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.647198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.647206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.647212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.664 [2024-12-14 16:49:22.647219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.664 [2024-12-14 16:49:22.659433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.664 [2024-12-14 16:49:22.659823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.664 [2024-12-14 16:49:22.659841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.664 [2024-12-14 16:49:22.659848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.664 [2024-12-14 16:49:22.660022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.664 [2024-12-14 16:49:22.660195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.664 [2024-12-14 16:49:22.660204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.664 [2024-12-14 16:49:22.660210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.664 [2024-12-14 16:49:22.660217] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.664 Malloc0 00:35:52.664 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.665 [2024-12-14 16:49:22.672417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.665 [2024-12-14 16:49:22.672755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.665 [2024-12-14 16:49:22.672772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.665 [2024-12-14 16:49:22.672779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.665 [2024-12-14 16:49:22.672952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.665 [2024-12-14 16:49:22.673126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.665 [2024-12-14 16:49:22.673134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.665 [2024-12-14 16:49:22.673140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.665 [2024-12-14 16:49:22.673147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.665 [2024-12-14 16:49:22.685503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.665 [2024-12-14 16:49:22.685978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.665 [2024-12-14 16:49:22.685996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bd490 with addr=10.0.0.2, port=4420 00:35:52.665 [2024-12-14 16:49:22.686003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bd490 is same with the state(6) to be set 00:35:52.665 [2024-12-14 16:49:22.686176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bd490 (9): Bad file descriptor 00:35:52.665 [2024-12-14 16:49:22.686348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:52.665 [2024-12-14 16:49:22.686356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:52.665 [2024-12-14 16:49:22.686363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:52.665 [2024-12-14 16:49:22.686370] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.665 [2024-12-14 16:49:22.694407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.665 [2024-12-14 16:49:22.698554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.665 16:49:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1203146 00:35:52.922 [2024-12-14 16:49:22.767500] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:35:54.788 5859.71 IOPS, 22.89 MiB/s [2024-12-14T15:49:25.805Z] 6572.38 IOPS, 25.67 MiB/s [2024-12-14T15:49:26.737Z] 7109.56 IOPS, 27.77 MiB/s [2024-12-14T15:49:27.670Z] 7551.80 IOPS, 29.50 MiB/s [2024-12-14T15:49:29.042Z] 7919.18 IOPS, 30.93 MiB/s [2024-12-14T15:49:29.975Z] 8216.83 IOPS, 32.10 MiB/s [2024-12-14T15:49:30.906Z] 8465.23 IOPS, 33.07 MiB/s [2024-12-14T15:49:31.839Z] 8677.43 IOPS, 33.90 MiB/s [2024-12-14T15:49:31.839Z] 8872.00 IOPS, 34.66 MiB/s 00:36:01.753 Latency(us) 00:36:01.753 [2024-12-14T15:49:31.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:01.753 Verification LBA range: start 0x0 length 0x4000 00:36:01.753 Nvme1n1 : 15.01 8869.43 34.65 11162.05 0.00 6369.88 698.27 14043.43 00:36:01.753 [2024-12-14T15:49:31.839Z] =================================================================================================================== 00:36:01.753 [2024-12-14T15:49:31.839Z] Total : 8869.43 34.65 11162.05 0.00 6369.88 698.27 14043.43 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:01.753 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:01.753 rmmod nvme_tcp 00:36:02.012 rmmod nvme_fabrics 00:36:02.012 rmmod nvme_keyring 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1204191 ']' 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1204191 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1204191 ']' 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1204191 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1204191 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1204191' 00:36:02.012 killing process with pid 1204191 00:36:02.012 
16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1204191 00:36:02.012 16:49:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1204191 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.271 16:49:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.176 16:49:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.176 00:36:04.176 real 0m25.870s 00:36:04.176 user 1m0.228s 00:36:04.176 sys 0m6.758s 00:36:04.176 16:49:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:04.176 16:49:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.176 ************************************ 00:36:04.176 END TEST nvmf_bdevperf 00:36:04.176 
************************************ 00:36:04.176 16:49:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:04.176 16:49:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:04.176 16:49:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:04.176 16:49:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.435 ************************************ 00:36:04.436 START TEST nvmf_target_disconnect 00:36:04.436 ************************************ 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:04.436 * Looking for test storage... 00:36:04.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:04.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.436 --rc genhtml_branch_coverage=1 00:36:04.436 --rc genhtml_function_coverage=1 00:36:04.436 --rc genhtml_legend=1 00:36:04.436 --rc geninfo_all_blocks=1 00:36:04.436 --rc geninfo_unexecuted_blocks=1 
00:36:04.436 00:36:04.436 ' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:04.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.436 --rc genhtml_branch_coverage=1 00:36:04.436 --rc genhtml_function_coverage=1 00:36:04.436 --rc genhtml_legend=1 00:36:04.436 --rc geninfo_all_blocks=1 00:36:04.436 --rc geninfo_unexecuted_blocks=1 00:36:04.436 00:36:04.436 ' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:04.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.436 --rc genhtml_branch_coverage=1 00:36:04.436 --rc genhtml_function_coverage=1 00:36:04.436 --rc genhtml_legend=1 00:36:04.436 --rc geninfo_all_blocks=1 00:36:04.436 --rc geninfo_unexecuted_blocks=1 00:36:04.436 00:36:04.436 ' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:04.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.436 --rc genhtml_branch_coverage=1 00:36:04.436 --rc genhtml_function_coverage=1 00:36:04.436 --rc genhtml_legend=1 00:36:04.436 --rc geninfo_all_blocks=1 00:36:04.436 --rc geninfo_unexecuted_blocks=1 00:36:04.436 00:36:04.436 ' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.436 16:49:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:04.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.436 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:04.437 16:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:11.007 
16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:11.007 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:11.007 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:11.007 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:11.008 Found net devices under 0000:af:00.0: cvl_0_0 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:11.008 Found net devices under 0000:af:00.1: cvl_0_1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:11.008 16:49:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:11.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:11.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:36:11.008 00:36:11.008 --- 10.0.0.2 ping statistics --- 00:36:11.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.008 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:11.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:11.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:36:11.008 00:36:11.008 --- 10.0.0.1 ping statistics --- 00:36:11.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.008 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:11.008 16:49:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:11.008 ************************************ 00:36:11.008 START TEST nvmf_target_disconnect_tc1 00:36:11.008 ************************************ 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:11.008 [2024-12-14 16:49:40.537591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-12-14 16:49:40.537647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206dc50 with 
addr=10.0.0.2, port=4420 00:36:11.008 [2024-12-14 16:49:40.537672] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:11.008 [2024-12-14 16:49:40.537687] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:11.008 [2024-12-14 16:49:40.537694] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:11.008 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:11.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:11.008 Initializing NVMe Controllers 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:11.008 00:36:11.008 real 0m0.121s 00:36:11.008 user 0m0.059s 00:36:11.008 sys 0m0.061s 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.008 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:11.009 ************************************ 00:36:11.009 END TEST nvmf_target_disconnect_tc1 00:36:11.009 ************************************ 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:11.009 16:49:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:11.009 ************************************ 00:36:11.009 START TEST nvmf_target_disconnect_tc2 00:36:11.009 ************************************ 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1209093 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1209093 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1209093 ']' 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.009 [2024-12-14 16:49:40.676449] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:11.009 [2024-12-14 16:49:40.676493] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.009 [2024-12-14 16:49:40.756480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:11.009 [2024-12-14 16:49:40.779965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:11.009 [2024-12-14 16:49:40.780002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:11.009 [2024-12-14 16:49:40.780009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:11.009 [2024-12-14 16:49:40.780015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:11.009 [2024-12-14 16:49:40.780020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:11.009 [2024-12-14 16:49:40.781430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:11.009 [2024-12-14 16:49:40.781540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:11.009 [2024-12-14 16:49:40.781645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:11.009 [2024-12-14 16:49:40.781646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:11.009 Malloc0
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:11.009 [2024-12-14 16:49:40.947435] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:11.009 [2024-12-14 16:49:40.976545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1209297
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:36:11.009 16:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:36:13.564 16:49:42
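The traced `rpc_cmd` calls above set up the target over SPDK's JSON-RPC interface before the reconnect workload starts. As a minimal sketch, the same sequence can be rendered as JSON-RPC 2.0 request payloads; the method names come straight from the log, but the parameter names used here are illustrative assumptions and may differ from SPDK's actual RPC schema:

```python
import json

# Method names mirror the rpc_cmd trace above; parameter names are
# assumptions for illustration, not SPDK's verified schema.
calls = [
    ("bdev_malloc_create", {"total_size": 64, "block_size": 512, "name": "Malloc0"}),
    ("nvmf_create_transport", {"trtype": "tcp"}),
    ("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "allow_any_host": True,
                               "serial_number": "SPDK00000000000001"}),
    ("nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "bdev_name": "Malloc0"}),
    ("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                     "trtype": "tcp", "traddr": "10.0.0.2",
                                     "trsvcid": "4420"}),
]

def build_requests(calls):
    """Render the ordered call list as JSON-RPC 2.0 request strings."""
    return [json.dumps({"jsonrpc": "2.0", "id": i, "method": m, "params": p})
            for i, (m, p) in enumerate(calls, start=1)]

for req in build_requests(calls):
    print(req)
```

The ordering matters: the malloc bdev must exist before it is added as a namespace, and the subsystem must exist before listeners are attached, which is exactly the order the test script follows.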
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1209093
00:36:13.564 16:49:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:36:13.564 Read completed with error (sct=0, sc=8)
00:36:13.564 starting I/O failed
00:36:13.564 Write completed with error (sct=0, sc=8)
00:36:13.564 starting I/O failed
[... further identical "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" records elided ...]
00:36:13.565 [2024-12-14 16:49:43.008230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... further identical Read/Write completion-error records elided ...]
00:36:13.565 [2024-12-14 16:49:43.008439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... further identical Read/Write completion-error records elided ...]
00:36:13.565 [2024-12-14 16:49:43.008656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... further identical Read/Write completion-error records elided ...]
00:36:13.565 [2024-12-14 16:49:43.008853] nvme_qpair.c:
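The burst of records above is mechanical: once the target is killed, every in-flight I/O on each queue pair completes in error and the driver logs one "completed with error" / "starting I/O failed" pair per command. A hypothetical log-analysis helper (not part of the test suite) can tally these pairs by operation and status code:

```python
import re
from collections import Counter

# Matches records like: Read completed with error (sct=0, sc=8)
pattern = re.compile(r"^(Read|Write) completed with error \(sct=(\d+), sc=(\d+)\)")

def tally(text):
    """Count completion-error records by (operation, sct, sc)."""
    counts = Counter()
    for line in text.splitlines():
        m = pattern.match(line)
        if m:
            counts[(m.group(1), int(m.group(2)), int(m.group(3)))] += 1
    return counts

# A small sample in the same shape as the log records above.
sample = """\
Read completed with error (sct=0, sc=8)
starting I/O failed
Write completed with error (sct=0, sc=8)
starting I/O failed
Read completed with error (sct=0, sc=8)
starting I/O failed
"""

print(tally(sample))
```

Run against the full console log, a tally like this makes it easy to confirm that all failures share the same (sct, sc) status rather than being a mix of error causes.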
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:13.565 [2024-12-14 16:49:43.009095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.565 [2024-12-14 16:49:43.009118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.565 qpair failed and we were unable to recover it.
[... repeated "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." record triplets for tqpair=0x7fedec000b90 elided ...]
00:36:13.566 [2024-12-14 16:49:43.013267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.566 [2024-12-14 16:49:43.013287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:13.566 qpair failed and we were unable to recover it.
00:36:13.566 [2024-12-14 16:49:43.013371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.566 [2024-12-14 16:49:43.013393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.566 qpair failed and we were unable to recover it.
00:36:13.566 [2024-12-14 16:49:43.013473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.566 [2024-12-14 16:49:43.013494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.566 qpair failed and we were unable to recover it.
[... repeated "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." record triplets for tqpair=0x7fedec000b90 elided (timestamps 16:49:43.013647 through 16:49:43.016323) ...]
00:36:13.567 [2024-12-14 16:49:43.016392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.016402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.016471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.016480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.016550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.016563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.016709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.016719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.016849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.016859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 
00:36:13.567 [2024-12-14 16:49:43.016980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.016991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.017115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.017127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.017199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.017209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.017350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.017363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.017427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.017440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 
00:36:13.567 [2024-12-14 16:49:43.017515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.017528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.017706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.017725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.017900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.017918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.018008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.018025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.567 qpair failed and we were unable to recover it. 00:36:13.567 [2024-12-14 16:49:43.018107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.567 [2024-12-14 16:49:43.018121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 
00:36:13.568 [2024-12-14 16:49:43.018267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.018281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.018434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.018448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.018521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.018534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.018609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.018622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.018761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.018775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 
00:36:13.568 [2024-12-14 16:49:43.018849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.018861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.018931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.018944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.019173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.019187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.019342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.019355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.019482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.019503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 
00:36:13.568 [2024-12-14 16:49:43.019590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.019604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.019678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.019691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.019853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.019867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.019993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.020026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.020215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.020246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 
00:36:13.568 [2024-12-14 16:49:43.020428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.020460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.020648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.020662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.020726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.020739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.020796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.020809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.020938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.020952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 
00:36:13.568 [2024-12-14 16:49:43.021015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.021027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.021164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.021177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.021310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.021324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.021394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.021407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.021544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.021566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 
00:36:13.568 [2024-12-14 16:49:43.021721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.021735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.021813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.021825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.021906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.021920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.022053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.022135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 
00:36:13.568 [2024-12-14 16:49:43.022283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.022425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.022498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.022690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.022789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 
00:36:13.568 [2024-12-14 16:49:43.022865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.022940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.022955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.023102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.568 [2024-12-14 16:49:43.023116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.568 qpair failed and we were unable to recover it. 00:36:13.568 [2024-12-14 16:49:43.023180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.023281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 
00:36:13.569 [2024-12-14 16:49:43.023371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.023519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.023618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.023697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.023788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 
00:36:13.569 [2024-12-14 16:49:43.023877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.023985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.023999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.024062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.024075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.024269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.024283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.024414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.024428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 
00:36:13.569 [2024-12-14 16:49:43.024492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.024505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.024646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.024660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.024752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.024766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.024837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.024850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.024930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.024944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 
00:36:13.569 [2024-12-14 16:49:43.025078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.025092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.025159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.025172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.025249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.025262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.025328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.025341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 00:36:13.569 [2024-12-14 16:49:43.025486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.569 [2024-12-14 16:49:43.025500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.569 qpair failed and we were unable to recover it. 
00:36:13.569 [2024-12-14 16:49:43.025723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.569 [2024-12-14 16:49:43.025737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.569 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" record repeats, identical except for timestamps, from 16:49:43.025803 through 16:49:43.043596 ...]
00:36:13.572 [2024-12-14 16:49:43.043596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.572 [2024-12-14 16:49:43.043621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.572 qpair failed and we were unable to recover it.
00:36:13.572 [2024-12-14 16:49:43.043720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.043743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.043892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.043915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.043996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.044018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.044108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.044131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.044305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.044328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 
00:36:13.572 [2024-12-14 16:49:43.044420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.044442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.044525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.044548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.044710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.044734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.044822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.044845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.044991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.045014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 
00:36:13.572 [2024-12-14 16:49:43.045105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.045128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.045279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.045301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.045382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.572 [2024-12-14 16:49:43.045404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.572 qpair failed and we were unable to recover it. 00:36:13.572 [2024-12-14 16:49:43.045504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.045527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.045764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.045787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 
00:36:13.573 [2024-12-14 16:49:43.045880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.045903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.046006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.046029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.046201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.046223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.046302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.046325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.046570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.046594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 
00:36:13.573 [2024-12-14 16:49:43.046770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.046793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.046881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.046909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.047127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.047150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.047298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.047321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.047415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.047438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 
00:36:13.573 [2024-12-14 16:49:43.047541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.047580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.047683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.047706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.047800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.047823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.047914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.047937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.048107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.048130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 
00:36:13.573 [2024-12-14 16:49:43.048393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.048425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.048543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.048591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.048767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.048799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.048906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.048938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.049106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.049138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 
00:36:13.573 [2024-12-14 16:49:43.049318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.049350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.049586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.049620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.049732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.049764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.049939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.049965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.050212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.050238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 
00:36:13.573 [2024-12-14 16:49:43.050346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.050373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.050476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.050502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.050672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.050699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.050863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.050896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.051065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.051097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 
00:36:13.573 [2024-12-14 16:49:43.051338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.051370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.051494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.051521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.573 [2024-12-14 16:49:43.051624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.573 [2024-12-14 16:49:43.051651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.573 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.051909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.051936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.052216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.052247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 
00:36:13.574 [2024-12-14 16:49:43.052411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.052442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.052554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.052595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.052766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.052806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.052918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.052945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.053052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.053078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 
00:36:13.574 [2024-12-14 16:49:43.053243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.053270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.053499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.053525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.053706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.053733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.053823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.053849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.053941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.053967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 
00:36:13.574 [2024-12-14 16:49:43.054160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.054192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.054360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.054397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.054604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.054637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.054805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.054833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.054989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.055031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 
00:36:13.574 [2024-12-14 16:49:43.055147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.055178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.055439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.055471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.055645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.055678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.055799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.055825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.055998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.056024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 
00:36:13.574 [2024-12-14 16:49:43.056131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.056158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.056318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.056345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.056575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.056602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.056710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.056736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 00:36:13.574 [2024-12-14 16:49:43.056926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.574 [2024-12-14 16:49:43.056953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.574 qpair failed and we were unable to recover it. 
00:36:13.574 [2024-12-14 16:49:43.057146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.574 [2024-12-14 16:49:43.057177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.574 qpair failed and we were unable to recover it.
00:36:13.577 message repeated 114 times: [connect() failed, errno = 111; sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it] (2024-12-14 16:49:43.057292 through 16:49:43.079703)
00:36:13.577 [2024-12-14 16:49:43.079811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.079843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.079946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.079976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.080102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.080133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.080316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.080347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.080457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.080488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 
00:36:13.577 [2024-12-14 16:49:43.080622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.080654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.080858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.080889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.081012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.081043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.081216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.081247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.081350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.081381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 
00:36:13.577 [2024-12-14 16:49:43.081479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.081510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.081696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.081728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.081896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.081927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.082050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.082080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.082182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.082213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 
00:36:13.577 [2024-12-14 16:49:43.082396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.082426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.082532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.082572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.082691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.082722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.082907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.082938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.083142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.083173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 
00:36:13.577 [2024-12-14 16:49:43.083274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.083304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.083491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.083521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.083654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.083686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.083870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.083902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.084094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.084125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 
00:36:13.577 [2024-12-14 16:49:43.084312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.084343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.084548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.084592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.084697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.084727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.084895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.084926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.085106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.085137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 
00:36:13.577 [2024-12-14 16:49:43.085252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.085283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.085405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.085441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.085549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.085594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.085710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.085741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 00:36:13.577 [2024-12-14 16:49:43.085850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.577 [2024-12-14 16:49:43.085881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.577 qpair failed and we were unable to recover it. 
00:36:13.577 [2024-12-14 16:49:43.086062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.086093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.086259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.086290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.086411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.086441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.086538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.086579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.086840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.086871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 
00:36:13.578 [2024-12-14 16:49:43.087066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.087098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.087205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.087236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.087476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.087507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.087718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.087752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.087865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.087896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 
00:36:13.578 [2024-12-14 16:49:43.088021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.088053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.088305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.088337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.088510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.088541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.088727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.088760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.088938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.088970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 
00:36:13.578 [2024-12-14 16:49:43.089079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.089110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.089222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.089252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.089353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.089384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.089515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.089546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.089747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.089780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 
00:36:13.578 [2024-12-14 16:49:43.089896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.089928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.090098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.090129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.090240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.090271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.090395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.090427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.090621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.090654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 
00:36:13.578 [2024-12-14 16:49:43.090823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.090855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.090986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.091017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.091276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.091307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.091478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.091510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.091634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.091667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 
00:36:13.578 [2024-12-14 16:49:43.091872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.091904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.092014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.092045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.092147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.092178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.092357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.092389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.092657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.092690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 
00:36:13.578 [2024-12-14 16:49:43.092809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.092840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.093104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.093141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.093316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.093347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.093531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.093571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 00:36:13.578 [2024-12-14 16:49:43.093763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.578 [2024-12-14 16:49:43.093795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.578 qpair failed and we were unable to recover it. 
00:36:13.578 [2024-12-14 16:49:43.094046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.578 [2024-12-14 16:49:43.094078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.578 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111, ECONNREFUSED) and unrecoverable qpair errors repeat for tqpair=0x7fedf8000b90 through 16:49:43.106143, then for tqpair=0x7fedec000b90 from 16:49:43.106466 through 16:49:43.116722, all against addr=10.0.0.2, port=4420 ...]
00:36:13.579 [2024-12-14 16:49:43.116840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.579 [2024-12-14 16:49:43.116871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.579 qpair failed and we were unable to recover it. 00:36:13.579 [2024-12-14 16:49:43.116989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.117020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.117139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.117171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.117350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.117382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.117578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.117610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.117729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.117761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.118007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.118037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.118223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.118254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.118385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.118416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.118596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.118629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.118886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.118917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.119042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.119073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.119263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.119294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.119464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.119494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.119620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.119654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.119827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.119858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.120051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.120082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.120276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.120307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.120410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.120441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.120630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.120662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.120904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.120936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.121115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.121145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.121258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.121289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.121466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.121497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.121670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.121702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.121871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.121901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.122089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.122121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.122308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.122338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.122523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.122570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.122830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.122861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.123124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.123155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.123266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.123297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.123421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.123453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.123620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.123652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.123790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.123821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.124000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.124032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.124161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.124192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.124372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.124403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.124575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.124608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.124798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.124829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.124995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.125026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.125128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.125158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.125368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.125400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.125636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.125669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.125859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.125890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.126018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.126050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.126290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.126322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.126433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.126464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.126573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.126605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.126788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.126819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.127072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.127102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.127287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.127319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.127495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.127527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.127804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.127873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.128112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.128182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.128337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.128374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.128584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.128619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.128789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.128821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.128942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.128973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.129139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.129170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.129371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.129403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.129576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.129607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.129839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.129871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.130076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.130108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.130222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.130253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.130366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.130397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.130667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.130700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.130868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.130900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.131019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.131058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.131248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.131279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.131470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.131501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.131687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.131720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.131905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.131937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.132127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.132158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.132405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.132436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.580 [2024-12-14 16:49:43.132576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.132609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.132812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.132844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.133024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.133055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.133170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.133201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 00:36:13.580 [2024-12-14 16:49:43.133393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.580 [2024-12-14 16:49:43.133424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.580 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.155070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.155122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.155288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.155313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.155467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.155489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.155701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.155736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.155927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.155958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.156068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.156100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.156268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.156299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.156504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.156535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.156738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.156769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.156982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.157013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.157132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.157164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.157279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.157311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.157445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.157476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.157583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.157615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.157805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.157836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.157958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.157979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.158074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.158094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.158241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.158262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.158431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.158461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.158646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.158679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.158865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.158898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.159020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.159041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.159215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.159236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.159407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.159429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.159597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.159620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.159700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.159729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.159825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.159846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.160078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.160110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.160244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.160276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.160395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.160427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.160626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.160658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.160839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.160870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.160989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.161019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.161185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.161216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.161387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.161418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.161600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.161633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.161894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.161925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.162129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.162161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.162337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.162368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.162573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.162608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.162777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.162807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.163041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.163071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.163253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.163283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.163394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.163424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.163606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.163637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.163755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.163785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.163935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.163966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.164154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.164185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.164366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.164391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.164490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.164512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.164604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.164626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.164720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.164742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.164912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.164934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.165099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.165140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.165258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.165289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 
00:36:13.582 [2024-12-14 16:49:43.165411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.165442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.165574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.165607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.165789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.165820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.165988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.166019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.582 qpair failed and we were unable to recover it. 00:36:13.582 [2024-12-14 16:49:43.166124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.582 [2024-12-14 16:49:43.166156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 
00:36:13.583 [2024-12-14 16:49:43.166335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.166367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.166553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.166606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.166739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.166771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.166960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.167001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.167177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.167199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 
00:36:13.583 [2024-12-14 16:49:43.167275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.167295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.167564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.167600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.167720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.167751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.167940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.167971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.168185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.168217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 
00:36:13.583 [2024-12-14 16:49:43.168329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.168360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.168478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.168509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.168700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.168725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.168952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.168973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 00:36:13.583 [2024-12-14 16:49:43.169169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.583 [2024-12-14 16:49:43.169191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.583 qpair failed and we were unable to recover it. 
00:36:13.583 [2024-12-14 16:49:43.169362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.583 [2024-12-14 16:49:43.169383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.583 qpair failed and we were unable to recover it.
00:36:13.584 [... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence for tqpair=0xbebcd0 (addr=10.0.0.2, port=4420) repeats continuously through 16:49:43.189317 ...]
00:36:13.584 [2024-12-14 16:49:43.189463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.189484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.189569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.189591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.189758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.189779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.189924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.189945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.190128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.190150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 
00:36:13.584 [2024-12-14 16:49:43.190304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.190325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.190418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.190439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.190528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.190549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.190653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.190675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.190822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.190843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 
00:36:13.584 [2024-12-14 16:49:43.190993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.191033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.191205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.191236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.191406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.191438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.191601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.191634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.191926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.191956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 
00:36:13.584 [2024-12-14 16:49:43.192089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.192120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.192311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.192333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.192507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.192539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.192684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.192715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.192821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.192852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 
00:36:13.584 [2024-12-14 16:49:43.193046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.193077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.193328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.193359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.193471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.193503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.193619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.193651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.193819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.193850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 
00:36:13.584 [2024-12-14 16:49:43.193966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.193998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.194123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.194144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.194293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.584 [2024-12-14 16:49:43.194314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.584 qpair failed and we were unable to recover it. 00:36:13.584 [2024-12-14 16:49:43.194481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.194502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.194654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.194676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.194763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.194783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.194874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.194896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.194991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.195012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.195127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.195147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.195299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.195321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.195479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.195510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.195625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.195657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.195830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.195862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.195965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.195996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.196158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.196190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.196285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.196306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.196395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.196418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.196631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.196654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.196736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.196756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.196902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.196923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.197066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.197087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.197265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.197296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.197411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.197442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.197635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.197667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.197836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.197867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.197982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.198003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.198162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.198183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.198268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.198289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.198390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.198411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.198500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.198522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.198685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.198707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.198869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.198891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.199072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.199094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.199267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.199288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.199375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.199397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.199560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.199583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.199763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.199784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.199866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.199887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.199971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.199991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.200092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.200113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.200276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.200297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.200480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.200501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.200598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.200621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.200771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.200792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.200872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.200892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.201064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.201085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.201177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.201198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.201434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.201455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.201698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.201720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.201808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.201830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [2024-12-14 16:49:43.201931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.201951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.202120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.202141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.202219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.202239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.202387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.202408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 00:36:13.585 [2024-12-14 16:49:43.202489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.585 [2024-12-14 16:49:43.202508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.585 qpair failed and we were unable to recover it. 
00:36:13.585 [... the same pair of errors — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 — repeats continuously from [2024-12-14 16:49:43.202621] through [2024-12-14 16:49:43.220648]; every qpair failed and we were unable to recover it ...]
00:36:13.586 [2024-12-14 16:49:43.220730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.586 [2024-12-14 16:49:43.220752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.586 qpair failed and we were unable to recover it. 00:36:13.586 [2024-12-14 16:49:43.220844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.586 [2024-12-14 16:49:43.220865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.586 qpair failed and we were unable to recover it. 00:36:13.586 [2024-12-14 16:49:43.220976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.586 [2024-12-14 16:49:43.220998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.586 qpair failed and we were unable to recover it. 00:36:13.586 [2024-12-14 16:49:43.221162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.586 [2024-12-14 16:49:43.221183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.586 qpair failed and we were unable to recover it. 00:36:13.586 [2024-12-14 16:49:43.221297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.221319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.221403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.221425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.221512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.221534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.221635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.221658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.221758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.221780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.221878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.221900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.222070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.222092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.222186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.222208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.222357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.222379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.222480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.222502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.222607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.222630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.222807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.222829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.222913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.222935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.223118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.223139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.223308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.223330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.223432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.223452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.223554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.223583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.223676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.223697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.223804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.223824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.223994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.224015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.224175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.224207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.224311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.224342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.224441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.224479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.224650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.224684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.224815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.224856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.225019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.225040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.225225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.225257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.225426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.225457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.225661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.225693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.225814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.225845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.225962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.225983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.226144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.226166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.226361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.226382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.226481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.226503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.226673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.226705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.226800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.226822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.226919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.226941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.227047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.227069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.227176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.227198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.227369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.227390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.227607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.227630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.227720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.227741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.227993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.228025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.228140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.228172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.228279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.228310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.228428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.228459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.228628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.228661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.228774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.228805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.228980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.229002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.229150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.229179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.229270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.229290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.229445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.229466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.229680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.229703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.229805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.229826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.230007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.230029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.230119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.230141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.230310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.230331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.230550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.230626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.230792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.230814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.230963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.230984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.231160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.231191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.231383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.231415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.231581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.231614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.231729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.231761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.231875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.231906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.232073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.232104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.232315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.232337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.232418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.232438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.232517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.232538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.232636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.232658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.232733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.232754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.232927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.232948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.233093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.233115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.233188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.233210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.233299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.233320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.587 [2024-12-14 16:49:43.233550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.233579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 
00:36:13.587 [2024-12-14 16:49:43.233749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.587 [2024-12-14 16:49:43.233771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.587 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.233880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.233901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.233998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.234019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.234110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.234131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.234286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.234307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.234417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.234438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.234520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.234541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.234668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.234690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.234840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.234862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.234974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.234996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.235085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.235106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.235347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.235379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.235572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.235606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.235730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.235760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.235936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.235973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.236225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.236267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.236447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.236469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.236582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.236605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.236777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.236798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.236880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.236900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.236991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.237013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.237201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.237223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.237355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.237386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.237575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.237608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.237776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.237808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.237922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.237954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.238139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.238171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.238374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.238406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.238526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.238590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.238829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.238861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.238972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.239004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.239309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.239340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.239446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.239476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.239762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.239795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.239912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.239944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.240043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.240064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.240208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.240229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.240386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.240409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.240652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.240673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.240766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.240787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.240934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.240970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.241075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.241111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.241218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.241249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.241367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.241399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.241513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.241543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.241741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.241776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.242040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.242071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.242190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.242211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.242293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.242313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.242426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.242447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.242547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.242575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.242672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.242694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.242839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.242860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.242950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.242970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.243117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.243139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.243225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.243246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.243407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.243429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.243667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.243690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.243792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.243812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.243960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.243981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.244074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.244095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.244249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.244269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.244350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.244370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.244527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.244549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.244666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.244687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.244856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.244877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.244984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.245004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.245083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.245103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.245185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.245205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.245390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.245412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.245575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.245599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.245694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.245714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.245903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.245925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.246084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.246105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.246199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.246220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 
00:36:13.588 [2024-12-14 16:49:43.246306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.246325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.246495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.246516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.246613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.246634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.246723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.246743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.588 qpair failed and we were unable to recover it. 00:36:13.588 [2024-12-14 16:49:43.246918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.588 [2024-12-14 16:49:43.246939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 
00:36:13.589 [2024-12-14 16:49:43.247100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.247121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.247279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.247300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.247396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.247420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.247570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.247592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.247680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.247700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 
00:36:13.589 [2024-12-14 16:49:43.247789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.247809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.247959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.247981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.248075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.248095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.248185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.248205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.248306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.248326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 
00:36:13.589 [2024-12-14 16:49:43.248422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.248442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.248632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.248655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.248751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.248771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.248858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.248878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.249024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.249045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 
00:36:13.589 [2024-12-14 16:49:43.249198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.249220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.249377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.249398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.249480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.249501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.249584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.249605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 00:36:13.589 [2024-12-14 16:49:43.249753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.589 [2024-12-14 16:49:43.249774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.589 qpair failed and we were unable to recover it. 
00:36:13.589 [2024-12-14 16:49:43.249860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.249881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.250021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.250042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.250188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.250210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.250361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.250382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.250470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.250490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.250573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.250595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.250735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.250756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.250900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.250920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.251081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.251102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.251255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.251280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.251377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.251399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.251479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.251501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.251586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.251608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.251756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.251778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.251923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.251945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.252090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.252111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.252268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.252307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.252438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.252469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.252640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.252672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.252794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.252824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.252997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.253028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.253194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.253225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.253348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.253379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.253553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.253598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.253791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.253822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.253952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.253983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.254091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.254123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.254290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.254321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.254489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.254511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.254661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.254702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.254879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.254911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.255035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.255066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.255326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.255357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.255541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.255581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.255772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.255803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.255936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.255967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.256078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.256099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.256198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.256219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.256379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.256400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.256487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.256509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.256616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.256638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.256719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.256740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.256838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.256859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.257024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.257045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.257129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.257150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.257333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.257355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.257508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.257529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.257720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.257742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.257892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.257913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.258004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.258024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.258177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.258202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.258380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.258401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.258484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.258505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.258604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.258630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.258712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.258734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.258886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.258907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.259003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.259024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.259103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.259123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.259222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.259244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.259322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.259344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.259535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.259562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.259651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.259672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.259839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.259860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.260030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.260051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.260145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.260167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.260324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.260346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.260438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.260459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.260572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.260601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.260760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.260781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.260867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.260888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.261033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.261054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.261268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.261289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.261386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.589 [2024-12-14 16:49:43.261407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.589 qpair failed and we were unable to recover it.
00:36:13.589 [2024-12-14 16:49:43.261560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.261582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.261693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.261714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.261863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.261885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.262029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.262051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.262133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.262153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.262321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.262343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.262446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.262468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.262688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.262710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.262875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.262897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.262987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.263008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.263203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.263224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.263306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.263327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.263408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.263430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.263665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.263686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.263780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.263801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.263880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.263902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.264074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.264096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.264241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.264262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.264413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.264434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.264529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.264550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.264732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.264753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.264902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.590 [2024-12-14 16:49:43.264923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.590 qpair failed and we were unable to recover it.
00:36:13.590 [2024-12-14 16:49:43.265097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.265138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.265312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.265343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.265520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.265551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.265662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.265694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.265866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.265897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.266006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.266037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.266204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.266235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.266475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.266497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.266591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.266614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.266705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.266726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.266810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.266831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.266975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.266997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.267072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.267092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.267189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.267210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.267303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.267324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.267468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.267489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.267580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.267602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.267755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.267776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.267923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.267944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.268151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.268173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.268252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.268273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.268352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.268373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.268517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.268538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.268630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.268656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.268730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.268751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.268834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.268854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.268949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.268970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.269082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.269103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.269254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.269275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.269358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.269379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.269458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.269479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.269629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.269651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.269736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.269757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.269847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.269869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.269958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.269979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.270141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.270162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.270309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.270331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.270479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.270501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.270739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.270773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.271038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.271069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.271238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.271270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.271440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.271472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.271657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.271691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.271814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.271845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.271960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.271982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.272065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.272085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.272229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.272250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.272409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.272430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.272511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.272532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.272626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.272648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.272735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.272757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.272859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.272880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.272960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.272980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.273062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.273082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.273189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.273211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 
00:36:13.590 [2024-12-14 16:49:43.273303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.273324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.590 qpair failed and we were unable to recover it. 00:36:13.590 [2024-12-14 16:49:43.273407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.590 [2024-12-14 16:49:43.273428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.273591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.273613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.273694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.273716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.273863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.273884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 
00:36:13.591 [2024-12-14 16:49:43.274033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.274054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.274151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.274173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.274323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.274344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.274491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.274512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.274609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.274635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 
00:36:13.591 [2024-12-14 16:49:43.274733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.274755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.274930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.274950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.275057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.275078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.275232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.275253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.275351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.275372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 
00:36:13.591 [2024-12-14 16:49:43.275456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.275477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.275659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.275681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.275762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.275784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.275882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.275904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.276144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.276176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 
00:36:13.591 [2024-12-14 16:49:43.276361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.276392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.276507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.276539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.276672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.276706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.276813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.276844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.276954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.276985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 
00:36:13.591 [2024-12-14 16:49:43.277091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.277122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.277233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.277254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.277346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.277367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.277458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.277480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 00:36:13.591 [2024-12-14 16:49:43.277628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.277650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 
00:36:13.591 [2024-12-14 16:49:43.277796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.277818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 
00:36:13.591 [2024-12-14 16:49:43.285112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.591 [2024-12-14 16:49:43.285182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.591 qpair failed and we were unable to recover it. 
[identical connect()/qpair-failure messages repeated (errno = 111, tqpair=0xbebcd0 and 0x7fedec000b90, addr=10.0.0.2, port=4420) elided]
00:36:13.592 [2024-12-14 16:49:43.296712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.296734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.296982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.297003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.297156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.297178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.297397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.297418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.297502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.297522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 
00:36:13.592 [2024-12-14 16:49:43.297616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.297638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.297726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.297749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.297899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.297920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.298020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.298041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.298133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.298155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 
00:36:13.592 [2024-12-14 16:49:43.298369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.298390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.298473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.298494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.298661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.298685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.298848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.298870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 00:36:13.592 [2024-12-14 16:49:43.299037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.299059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.592 qpair failed and we were unable to recover it. 
00:36:13.592 [2024-12-14 16:49:43.299145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.592 [2024-12-14 16:49:43.299167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.299328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.299349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.299430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.299451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.299551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.299580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.299745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.299767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 
00:36:13.593 [2024-12-14 16:49:43.299866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.299887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.299971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.299992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.300162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.300183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.300420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.300441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.300655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.300678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 
00:36:13.593 [2024-12-14 16:49:43.300773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.300794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.300878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.300899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.300984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.301008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.301198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.301219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.301312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.301339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 
00:36:13.593 [2024-12-14 16:49:43.301445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.301466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.301570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.301592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.301749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.301771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.301862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.301883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.301974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.301995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 
00:36:13.593 [2024-12-14 16:49:43.302081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.302102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.302184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.302205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.302304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.302324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.302492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.302513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.302609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.302631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 
00:36:13.593 [2024-12-14 16:49:43.302778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.302800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.302953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.302974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.303081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.303102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.303249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.303270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.303429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.303450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 
00:36:13.593 [2024-12-14 16:49:43.303598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.303620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.593 [2024-12-14 16:49:43.303787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.593 [2024-12-14 16:49:43.303808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.593 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.303949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.303970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.304055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.304172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 
00:36:13.594 [2024-12-14 16:49:43.304274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.304376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.304484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.304586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.304697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 
00:36:13.594 [2024-12-14 16:49:43.304807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.304975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.304996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.305082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.305103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.305179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.305199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.305381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.305403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 
00:36:13.594 [2024-12-14 16:49:43.305545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.305575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.305721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.305742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.305904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.305925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.306092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.306113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.306194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.306215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 
00:36:13.594 [2024-12-14 16:49:43.306310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.306332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.306480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.306501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.306581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.306603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.306760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.306782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.306878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.306899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 
00:36:13.594 [2024-12-14 16:49:43.306998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.307018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.307168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.307189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.307416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.307447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.307574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.307607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 00:36:13.594 [2024-12-14 16:49:43.307798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.594 [2024-12-14 16:49:43.307829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.594 qpair failed and we were unable to recover it. 
00:36:13.594 [2024-12-14 16:49:43.307946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.594 [2024-12-14 16:49:43.307977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.594 qpair failed and we were unable to recover it.
00:36:13.597 [2024-12-14 16:49:43.326401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.597 [2024-12-14 16:49:43.326423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.597 qpair failed and we were unable to recover it. 00:36:13.597 [2024-12-14 16:49:43.326597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.597 [2024-12-14 16:49:43.326620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.597 qpair failed and we were unable to recover it. 00:36:13.597 [2024-12-14 16:49:43.326709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.597 [2024-12-14 16:49:43.326729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.597 qpair failed and we were unable to recover it. 00:36:13.597 [2024-12-14 16:49:43.326894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.597 [2024-12-14 16:49:43.326917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.597 qpair failed and we were unable to recover it. 00:36:13.597 [2024-12-14 16:49:43.327069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.597 [2024-12-14 16:49:43.327090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.597 qpair failed and we were unable to recover it. 
00:36:13.597 [2024-12-14 16:49:43.327241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.597 [2024-12-14 16:49:43.327273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.597 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.327381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.327412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.327623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.327656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.327772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.327804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.327906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.327938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 
00:36:13.598 [2024-12-14 16:49:43.328145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.328176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.328407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.328429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.328694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.328716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.328816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.328838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.329008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.329033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 
00:36:13.598 [2024-12-14 16:49:43.329131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.329152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.329298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.329320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.329536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.329573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.329666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.329686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.329836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.329857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 
00:36:13.598 [2024-12-14 16:49:43.330048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.330070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.330164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.330183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.330365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.330386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.330539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.330569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.330721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.330743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 
00:36:13.598 [2024-12-14 16:49:43.330891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.330913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.330993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.331014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.331121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.331141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.331315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.331337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.331504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.331526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 
00:36:13.598 [2024-12-14 16:49:43.331691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.331714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.331903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.331924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.332076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.332097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.332256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.332278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.332439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.332459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 
00:36:13.598 [2024-12-14 16:49:43.332606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.332631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.332780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.332801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.332910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.332932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.333086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.333108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.333200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.333220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 
00:36:13.598 [2024-12-14 16:49:43.333364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.333386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.333467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.333505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.333591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.333614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.333696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.333719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 00:36:13.598 [2024-12-14 16:49:43.333810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.598 [2024-12-14 16:49:43.333832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.598 qpair failed and we were unable to recover it. 
00:36:13.598 [2024-12-14 16:49:43.334046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.334068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.334235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.334257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.334401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.334422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.334595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.334619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.334706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.334727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 
00:36:13.599 [2024-12-14 16:49:43.334815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.334837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.334984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.335007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.335101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.335122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.335292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.335314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.335394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.335415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 
00:36:13.599 [2024-12-14 16:49:43.335510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.335531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.335637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.335660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.335770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.335791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.335979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.336095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 
00:36:13.599 [2024-12-14 16:49:43.336202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.336318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.336488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.336612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.336723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 
00:36:13.599 [2024-12-14 16:49:43.336843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.336950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.336971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.337075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.337096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.337243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.337265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.337418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.337440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 
00:36:13.599 [2024-12-14 16:49:43.337585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.337608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.337699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.337720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.337869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.337890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.338064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.338086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 00:36:13.599 [2024-12-14 16:49:43.338179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.338200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it. 
00:36:13.599 [2024-12-14 16:49:43.338353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.599 [2024-12-14 16:49:43.338375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.599 qpair failed and we were unable to recover it.
[the posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" triplet above repeats identically for tqpair=0xbebcd0, errno = 111, addr=10.0.0.2, port=4420, from 16:49:43.338466 through 16:49:43.354207]
00:36:13.602 [2024-12-14 16:49:43.354290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.354310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it.
00:36:13.602 [2024-12-14 16:49:43.354511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.354595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it.
[the same error triplet repeats for tqpair=0x7fedf8000b90 through 16:49:43.355589, after which the failures resume for tqpair=0xbebcd0 from 16:49:43.355762 through 16:49:43.355979]
00:36:13.602 [2024-12-14 16:49:43.356204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.356236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.356372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.356403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.356611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.356644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.356750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.356781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.356882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.356914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 
00:36:13.602 [2024-12-14 16:49:43.357035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.357067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.357252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.357283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.357382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.357404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.357486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.357507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.357693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.357716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 
00:36:13.602 [2024-12-14 16:49:43.357817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.357839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.357990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.358013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.358119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.358140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.358378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.358399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.358572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.358595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 
00:36:13.602 [2024-12-14 16:49:43.358688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.358708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.602 [2024-12-14 16:49:43.358793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.602 [2024-12-14 16:49:43.358814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.602 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.358894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.358915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.359019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.359039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.359188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.359210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 
00:36:13.603 [2024-12-14 16:49:43.359293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.359313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.359407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.359427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.359585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.359608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.359693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.359713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.359869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.359891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 
00:36:13.603 [2024-12-14 16:49:43.360057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.360097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.360265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.360296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.360402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.360434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.360635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.360669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.360839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.360870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 
00:36:13.603 [2024-12-14 16:49:43.360994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.361025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.361127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.361160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.361326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.361358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.361633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.361655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.361754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.361774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 
00:36:13.603 [2024-12-14 16:49:43.361924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.361945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.362041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.362062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.362179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.362201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.362346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.362376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.362473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.362495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 
00:36:13.603 [2024-12-14 16:49:43.362801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.362826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.362932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.362954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.363119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.363141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.363239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.363260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.363343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.363363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 
00:36:13.603 [2024-12-14 16:49:43.363613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.363636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.363733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.363753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.363925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.363962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.364082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.364113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.364239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.364271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 
00:36:13.603 [2024-12-14 16:49:43.364448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.364490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.364590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.364612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.364699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.364721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.364870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.364891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.365056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.365077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 
00:36:13.603 [2024-12-14 16:49:43.365172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.603 [2024-12-14 16:49:43.365193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.603 qpair failed and we were unable to recover it. 00:36:13.603 [2024-12-14 16:49:43.365375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.365397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.365568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.365590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.365759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.365781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.365874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.365896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 
00:36:13.604 [2024-12-14 16:49:43.366085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.366117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.366298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.366329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.366514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.366548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.366740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.366771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.366889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.366921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 
00:36:13.604 [2024-12-14 16:49:43.367043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.367080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.367202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.367222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.367302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.367322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.367474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.367496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.367638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.367661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 
00:36:13.604 [2024-12-14 16:49:43.367807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.367829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.368002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.368025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.368181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.368203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.368292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.368312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.368486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.368508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 
00:36:13.604 [2024-12-14 16:49:43.368712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.368734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.368837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.368858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.368943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.368964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.369044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.369066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.369153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.369175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 
00:36:13.604 [2024-12-14 16:49:43.369322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.369344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.369584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.369606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.369687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.369707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.369855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.369877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.369985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.370007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 
00:36:13.604 [2024-12-14 16:49:43.370106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.370129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.370235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.370256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.370471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.370493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.370586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.370607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 00:36:13.604 [2024-12-14 16:49:43.370690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.604 [2024-12-14 16:49:43.370713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.604 qpair failed and we were unable to recover it. 
00:36:13.607 [2024-12-14 16:49:43.388282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.388303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 00:36:13.607 [2024-12-14 16:49:43.388401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.388423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 00:36:13.607 [2024-12-14 16:49:43.388656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.388690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 00:36:13.607 [2024-12-14 16:49:43.388868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.388900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 00:36:13.607 [2024-12-14 16:49:43.389073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.389104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 
00:36:13.607 [2024-12-14 16:49:43.389272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.389304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 00:36:13.607 [2024-12-14 16:49:43.389417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.389450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 00:36:13.607 [2024-12-14 16:49:43.389568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.389601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 00:36:13.607 [2024-12-14 16:49:43.389790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.389812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 00:36:13.607 [2024-12-14 16:49:43.389894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.607 [2024-12-14 16:49:43.389916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.607 qpair failed and we were unable to recover it. 
00:36:13.608 [2024-12-14 16:49:43.390014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.390035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.390112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.390135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.390309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.390331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.390423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.390444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.390608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.390630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 
00:36:13.608 [2024-12-14 16:49:43.390795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.390817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.391024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.391062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.391202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.391234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.391336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.391368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.391481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.391512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 
00:36:13.608 [2024-12-14 16:49:43.391759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.391782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.391956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.391978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.392178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.392209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.392376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.392408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.392680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.392703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 
00:36:13.608 [2024-12-14 16:49:43.392951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.392982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.393148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.393179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.393294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.393327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.393452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.393473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.393658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.393680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 
00:36:13.608 [2024-12-14 16:49:43.393826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.393848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.393928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.393950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.394138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.394159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.394274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.394295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.394379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.394400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 
00:36:13.608 [2024-12-14 16:49:43.394500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.394521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.394693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.394715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.394810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.394832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.394915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.394936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.395015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.395037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 
00:36:13.608 [2024-12-14 16:49:43.395228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.395249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.395351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.395373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.395519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.395549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.395649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.395670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.395755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.395776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 
00:36:13.608 [2024-12-14 16:49:43.395879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.395900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.396069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.396090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.396264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.396285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.396364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.608 [2024-12-14 16:49:43.396383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.608 qpair failed and we were unable to recover it. 00:36:13.608 [2024-12-14 16:49:43.396476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.396497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 
00:36:13.609 [2024-12-14 16:49:43.396595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.396617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.396778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.396799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.397004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.397035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.397202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.397233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.397421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.397452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 
00:36:13.609 [2024-12-14 16:49:43.397636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.397670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.397873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.397904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.398075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.398106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.398316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.398347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.398635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.398658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 
00:36:13.609 [2024-12-14 16:49:43.398766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.398787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.398884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.398905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.398986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.399007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.399096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.399117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.399198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.399219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 
00:36:13.609 [2024-12-14 16:49:43.399310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.399331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.399431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.399452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.399567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.399589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.399752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.399774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.399854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.399876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 
00:36:13.609 [2024-12-14 16:49:43.400027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.400049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.400199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.400220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.400311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.400333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.400445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.400467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 00:36:13.609 [2024-12-14 16:49:43.400643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.609 [2024-12-14 16:49:43.400666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.609 qpair failed and we were unable to recover it. 
00:36:13.609 [2024-12-14 16:49:43.400827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.609 [2024-12-14 16:49:43.400849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.609 qpair failed and we were unable to recover it.
[... the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above (errno = 111, ECONNREFUSED) repeats continuously from 16:49:43.400 through 16:49:43.419, alternating between tqpair=0xbebcd0 and tqpair=0x7fedf8000b90; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:36:13.612 [2024-12-14 16:49:43.419729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.419761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.419951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.419973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.420068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.420091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.420184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.420205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.420294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.420315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 
00:36:13.612 [2024-12-14 16:49:43.420474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.420495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.420585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.420608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.420701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.420721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.420803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.420824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.420916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.420937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 
00:36:13.612 [2024-12-14 16:49:43.421033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.421054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.421153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.421174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.612 qpair failed and we were unable to recover it. 00:36:13.612 [2024-12-14 16:49:43.421250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.612 [2024-12-14 16:49:43.421271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.423675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.423706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.423789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.423809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-12-14 16:49:43.423906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.423931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.424020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.424041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.424265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.424296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.424479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.424510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.424636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.424670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-12-14 16:49:43.424865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.424896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.425010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.425041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.425238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.425269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.425375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.425406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.425509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.425541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-12-14 16:49:43.425672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.425704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.425880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.425911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.426030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.426061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.426252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.426283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.426398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.426441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-12-14 16:49:43.426650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.426678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.426783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.426809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.426987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.427014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.427112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.427139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.427317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.427343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-12-14 16:49:43.427502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.427528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.427632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.427659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.427881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.427907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.427996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.428022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.428177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.428204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-12-14 16:49:43.428298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.428324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.428427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.428454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.428544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.428577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.428742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.428769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.429025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.429052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-12-14 16:49:43.429227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.429253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.429354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.429380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.429471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.429497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.429610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.429637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.429734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.429760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 
00:36:13.613 [2024-12-14 16:49:43.429924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.429950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.430044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.613 [2024-12-14 16:49:43.430069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.613 qpair failed and we were unable to recover it. 00:36:13.613 [2024-12-14 16:49:43.430184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.430210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.430308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.430333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.430441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.430468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 
00:36:13.614 [2024-12-14 16:49:43.430645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.430673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.430771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.430802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.430965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.430992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.431148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.431175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.431265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.431291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 
00:36:13.614 [2024-12-14 16:49:43.431383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.431409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.431500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.431527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.431721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.431749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.431846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.431871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.431981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.432008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 
00:36:13.614 [2024-12-14 16:49:43.432101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.432127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.432222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.432247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.432358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.432385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.432494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.432520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.432619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.432646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 
00:36:13.614 [2024-12-14 16:49:43.432750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.432777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.432879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.432905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.433090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.433116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.433216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.433243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 00:36:13.614 [2024-12-14 16:49:43.433329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.433356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it. 
00:36:13.614 [2024-12-14 16:49:43.433540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.614 [2024-12-14 16:49:43.433574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.614 qpair failed and we were unable to recover it.
[The same three-message error sequence — connect() failed with errno = 111, sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously through 16:49:43.452052; identical repeats elided.]
00:36:13.617 [2024-12-14 16:49:43.452212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.452241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.452346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.452374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.452554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.452600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.452710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.452739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.452840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.452870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 
00:36:13.617 [2024-12-14 16:49:43.453035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.453065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.453165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.453193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.453293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.453321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.453497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.453526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.453703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.453733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 
00:36:13.617 [2024-12-14 16:49:43.453843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.453872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.454040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.454069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.454169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.454198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.454294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.454322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 00:36:13.617 [2024-12-14 16:49:43.454437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.617 [2024-12-14 16:49:43.454466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.617 qpair failed and we were unable to recover it. 
00:36:13.618 [2024-12-14 16:49:43.454599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.454628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.454804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.454832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.455006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.455035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.455166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.455208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.455339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.455371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 
00:36:13.618 [2024-12-14 16:49:43.455476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.455507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.455619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.455650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.455765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.455793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.455956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.455984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.456091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.456120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 
00:36:13.618 [2024-12-14 16:49:43.456301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.456329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.456499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.456528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.456654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.456684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.456867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.456895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.456993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.457021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 
00:36:13.618 [2024-12-14 16:49:43.457207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.457236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.457341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.457369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.457472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.457501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.457696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.457726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.457825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.457854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 
00:36:13.618 [2024-12-14 16:49:43.458034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.458062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.458169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.458198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.458309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.458337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.458438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.458467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.458626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.458656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 
00:36:13.618 [2024-12-14 16:49:43.458765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.458793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.458901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.458930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.459022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.459051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.459216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.459250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.459363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.459393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 
00:36:13.618 [2024-12-14 16:49:43.459601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.459631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.459732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.459761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.459926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.459955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.460051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.460079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.460176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.460206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 
00:36:13.618 [2024-12-14 16:49:43.460372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.460401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.460509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.460538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.460665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.460695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.460806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.460835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.618 qpair failed and we were unable to recover it. 00:36:13.618 [2024-12-14 16:49:43.460943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.618 [2024-12-14 16:49:43.460971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 
00:36:13.619 [2024-12-14 16:49:43.461081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.461110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.461234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.461263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.461456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.461490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.461608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.461638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.461746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.461775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 
00:36:13.619 [2024-12-14 16:49:43.461882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.461912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.462085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.462114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.462209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.462237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.462335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.462365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.462468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.462496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 
00:36:13.619 [2024-12-14 16:49:43.462672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.462703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.462805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.462834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.462930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.462959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.463137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.463166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.463332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.463361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 
00:36:13.619 [2024-12-14 16:49:43.463523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.463565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.463677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.463706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.463807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.463836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.463941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.463969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.464150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.464179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 
00:36:13.619 [2024-12-14 16:49:43.464283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.464313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.464414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.464447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.464629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.464657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.464900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.464928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 00:36:13.619 [2024-12-14 16:49:43.465026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.619 [2024-12-14 16:49:43.465051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.619 qpair failed and we were unable to recover it. 
00:36:13.622 [2024-12-14 16:49:43.482078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.482101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.482203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.482228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.482332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.482356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.482522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.482547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.482800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.482825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 
00:36:13.622 [2024-12-14 16:49:43.482937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.482962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.483059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.483081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.483175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.483199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.483356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.483381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.483651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.483677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 
00:36:13.622 [2024-12-14 16:49:43.483839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.483864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.483951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.483974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.484168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.484193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.484303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.484328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.484484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.484509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 
00:36:13.622 [2024-12-14 16:49:43.484605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.484629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.484721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.622 [2024-12-14 16:49:43.484745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.622 qpair failed and we were unable to recover it. 00:36:13.622 [2024-12-14 16:49:43.484835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.484860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.485078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.485103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.485199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.485222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 
00:36:13.623 [2024-12-14 16:49:43.485306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.485330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.485432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.485457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.485544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.485574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.485663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.485687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.485782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.485806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 
00:36:13.623 [2024-12-14 16:49:43.485959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.485984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.486234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.486259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.486348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.486371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.486545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.486577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.486759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.486784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 
00:36:13.623 [2024-12-14 16:49:43.486882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.486905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.487007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.487032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.487209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.487234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.487391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.487416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.487599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.487625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 
00:36:13.623 [2024-12-14 16:49:43.487784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.487809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.487963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.487988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.488097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.488124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.488286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.488310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.488412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.488435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 
00:36:13.623 [2024-12-14 16:49:43.488599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.488626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.488720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.488743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.488847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.488874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.489032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.489057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.489172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.489198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 
00:36:13.623 [2024-12-14 16:49:43.489366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.489392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.489479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.489503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.489605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.489629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.489734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.489759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.489918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.489943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 
00:36:13.623 [2024-12-14 16:49:43.490119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.490144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.490242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.623 [2024-12-14 16:49:43.490267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.623 qpair failed and we were unable to recover it. 00:36:13.623 [2024-12-14 16:49:43.490453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.490479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.490602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.490630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.490725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.490749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 
00:36:13.624 [2024-12-14 16:49:43.490901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.490925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.491074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.491098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.491279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.491309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.491408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.491432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.491528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.491553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 
00:36:13.624 [2024-12-14 16:49:43.491668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.491700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.492953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.492996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.493184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.493211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.493322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.493347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.493500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.493525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 
00:36:13.624 [2024-12-14 16:49:43.493764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.493790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.493967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.493992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.494098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.494122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.494278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.494304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.494461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.494486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 
00:36:13.624 [2024-12-14 16:49:43.494574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.494598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.494719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.494744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.494837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.494861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.494950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.494973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.495055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.495079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 
00:36:13.624 [2024-12-14 16:49:43.495178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.495204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.495370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.495393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.495495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.495519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.495654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.495679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 00:36:13.624 [2024-12-14 16:49:43.495774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.624 [2024-12-14 16:49:43.495798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.624 qpair failed and we were unable to recover it. 
00:36:13.627 [2024-12-14 16:49:43.512365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.512389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.512548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.512593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.512754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.512779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.512889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.512921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.513014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.513038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 
00:36:13.627 [2024-12-14 16:49:43.513139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.513162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.513315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.513340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.513496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.513522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.513695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.513720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.513955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.513981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 
00:36:13.627 [2024-12-14 16:49:43.514073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.514097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.514268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.514293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.514396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.514421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.514524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.514550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.514712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.514737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 
00:36:13.627 [2024-12-14 16:49:43.514919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.514944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.515057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.515082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.515172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.515197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.515311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.627 [2024-12-14 16:49:43.515336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.627 qpair failed and we were unable to recover it. 00:36:13.627 [2024-12-14 16:49:43.515439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.515464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 
00:36:13.628 [2024-12-14 16:49:43.515570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.515595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.515695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.515718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.515877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.515900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.515988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.516012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.516114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.516138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 
00:36:13.628 [2024-12-14 16:49:43.516236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.516259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.516479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.516504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.516604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.516629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.516724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.516747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.516832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.516855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 
00:36:13.628 [2024-12-14 16:49:43.516970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.516996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.517099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.517123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.517210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.517234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.517395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.517421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.517508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.517531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 
00:36:13.628 [2024-12-14 16:49:43.517642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.517666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.517768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.517794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.517906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.517932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.518051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.518077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.518165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.518190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 
00:36:13.628 [2024-12-14 16:49:43.518291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.518316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.518412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.518440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.518596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.518623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.518719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.518742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.518827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.518857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 
00:36:13.628 [2024-12-14 16:49:43.518954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.518978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.519076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.519103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.519206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.519229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.519314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.519337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.519450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.519476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 
00:36:13.628 [2024-12-14 16:49:43.519645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.519671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.519768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.519791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.519943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.519968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.628 [2024-12-14 16:49:43.520065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.628 [2024-12-14 16:49:43.520089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.628 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.520249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.520274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 
00:36:13.629 [2024-12-14 16:49:43.520378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.520402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.520504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.520528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.520654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.520680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.520776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.520801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.520951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.520976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 
00:36:13.629 [2024-12-14 16:49:43.521060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.521084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.521175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.521198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.521353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.521378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.521547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.521583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.521679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.521703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 
00:36:13.629 [2024-12-14 16:49:43.521871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.521896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.521992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.522016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.522111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.522138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.522228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.522251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.522343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.522366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 
00:36:13.629 [2024-12-14 16:49:43.522470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.522494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.522586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.522620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.522728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.522751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.522838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.522861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.523017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.523046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 
00:36:13.629 [2024-12-14 16:49:43.523153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.523177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.523263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.523286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.523439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.523464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.523619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.523645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.523739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.523762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 
00:36:13.629 [2024-12-14 16:49:43.523880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.523905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.524007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.524032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.524122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.524145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.524240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.524264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.524359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.524383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 
00:36:13.629 [2024-12-14 16:49:43.524545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.524609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.524700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.524724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.524817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.524840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.524934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.524957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.525046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.525072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 
00:36:13.629 [2024-12-14 16:49:43.525159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.525182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.525365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.525391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.525549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.525580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.629 qpair failed and we were unable to recover it. 00:36:13.629 [2024-12-14 16:49:43.525681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.629 [2024-12-14 16:49:43.525704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.525795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.525817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 
00:36:13.630 [2024-12-14 16:49:43.525979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.526003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.526088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.526109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.526274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.526297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.526383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.526405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.526495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.526518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 
00:36:13.630 [2024-12-14 16:49:43.526702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.526726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.526827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.526848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.526937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.526959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.527049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.527073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.527234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.527258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 
00:36:13.630 [2024-12-14 16:49:43.527338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.527360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.527466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.527489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.527582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.527605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.527716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.527740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.527921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.527943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 
00:36:13.630 [2024-12-14 16:49:43.528042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.528166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.528286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.528411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.528514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 
00:36:13.630 [2024-12-14 16:49:43.528641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.528750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.528851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.528969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.528991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.529135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.529159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 
00:36:13.630 [2024-12-14 16:49:43.529316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.529340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.529425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.529448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.529536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.529591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.529685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.529709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.529794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.529816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 
00:36:13.630 [2024-12-14 16:49:43.529903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.529925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.530017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.530044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.530129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.530151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.530235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.530257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.530358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.530380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 
00:36:13.630 [2024-12-14 16:49:43.530470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.530494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.530600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.530623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.530705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.630 [2024-12-14 16:49:43.530727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.630 qpair failed and we were unable to recover it. 00:36:13.630 [2024-12-14 16:49:43.530878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.530900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.531062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.531086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
00:36:13.631 [2024-12-14 16:49:43.531194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.531216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.531308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.531330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.531589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.531613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.531771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.531794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.531902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.531930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
00:36:13.631 [2024-12-14 16:49:43.532028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.532053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.532153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.532177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.532263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.532285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.532388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.532411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.532498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.532520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
00:36:13.631 [2024-12-14 16:49:43.532622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.532646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.532748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.532771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.532867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.532892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.532978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.533001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.533090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.533112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
00:36:13.631 [2024-12-14 16:49:43.533196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.533218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.533309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.533331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.533483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.533508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.533602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.533625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.533720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.533742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
00:36:13.631 [2024-12-14 16:49:43.533849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.533875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.534027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.534051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.534135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.534157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.534254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.534276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.534357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.534379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
00:36:13.631 [2024-12-14 16:49:43.534483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.534508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.534608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.534631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.534714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.534736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.534890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.534913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.534995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.535017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
00:36:13.631 [2024-12-14 16:49:43.535122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.535145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.535234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.535256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.535418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.535441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.535596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.535620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 00:36:13.631 [2024-12-14 16:49:43.535807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.535829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
00:36:13.631 [2024-12-14 16:49:43.535925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.631 [2024-12-14 16:49:43.535945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.631 qpair failed and we were unable to recover it. 
[preceding three-line error sequence repeats for tqpair=0xbebcd0 from 16:49:43.535925 through 16:49:43.542442]
00:36:13.633 [2024-12-14 16:49:43.542637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf9c70 is same with the state(6) to be set 00:36:13.633 [2024-12-14 16:49:43.542917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.633 [2024-12-14 16:49:43.542983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.633 qpair failed and we were unable to recover it. 
[same error sequence repeats for tqpair=0x7fedf8000b90 through 16:49:43.543826, then resumes for tqpair=0xbebcd0 through 16:49:43.551548]
00:36:13.634 [2024-12-14 16:49:43.551646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.634 [2024-12-14 16:49:43.551669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.634 qpair failed and we were unable to recover it. 00:36:13.634 [2024-12-14 16:49:43.551816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.634 [2024-12-14 16:49:43.551841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.634 qpair failed and we were unable to recover it. 00:36:13.634 [2024-12-14 16:49:43.551923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.634 [2024-12-14 16:49:43.551945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.634 qpair failed and we were unable to recover it. 00:36:13.634 [2024-12-14 16:49:43.552036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.634 [2024-12-14 16:49:43.552058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.634 qpair failed and we were unable to recover it. 00:36:13.634 [2024-12-14 16:49:43.552160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.552183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.552264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.552286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.552376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.552398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.552508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.552533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.552695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.552718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.552811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.552838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.553007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.553031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.553119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.553141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.553233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.553254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.553345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.553368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.553461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.553483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.553656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.553681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.553770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.553792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.553985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.554031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.554209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.554241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.554340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.554371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.554467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.554496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.554607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.554641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.554743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.554773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.554873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.554902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.555020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.555049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.555213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.555242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.555338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.555366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.555460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.555490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.555590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.555619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.555724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.555756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.555924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.555961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.556073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.556104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.556271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.556301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.556409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.556441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.556567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.556599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.556713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.556748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.556851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.556881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.557011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.557043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.557213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.557245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.557362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.557391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.557488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.557511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.557643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.557669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.557758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.557781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.557878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.557902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.558017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.558045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 
00:36:13.635 [2024-12-14 16:49:43.558131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.635 [2024-12-14 16:49:43.558155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.635 qpair failed and we were unable to recover it. 00:36:13.635 [2024-12-14 16:49:43.558309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.558334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.558419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.558443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.558529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.558552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.558744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.558770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 
00:36:13.636 [2024-12-14 16:49:43.558875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.558900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.559067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.559092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.559201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.559228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.559392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.559417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.559512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.559535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 
00:36:13.636 [2024-12-14 16:49:43.559629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.559654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.559745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.559773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.559861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.559885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.559978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.560093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 
00:36:13.636 [2024-12-14 16:49:43.560225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.560350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.560458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.560573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.560684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 
00:36:13.636 [2024-12-14 16:49:43.560792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.560908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.560932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.561020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.561044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.561199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.561225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.561329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.561354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 
00:36:13.636 [2024-12-14 16:49:43.561447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.561471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.561575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.561607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.561692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.561716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.561814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.561839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.561926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.561949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 
00:36:13.636 [2024-12-14 16:49:43.562097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.562121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.562208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.562232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.562326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.562350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.562433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.562456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 00:36:13.636 [2024-12-14 16:49:43.562546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.636 [2024-12-14 16:49:43.562597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.636 qpair failed and we were unable to recover it. 
00:36:13.639 [2024-12-14 16:49:43.574943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.575009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.575143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.575179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 
00:36:13.639 [2024-12-14 16:49:43.577737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.577760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.577844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.577866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.577964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.577986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.578090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.578116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.578270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.578294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 
00:36:13.639 [2024-12-14 16:49:43.578541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.578590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.578679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.578702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.578790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.578814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.578904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.578929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.579015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.579037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 
00:36:13.639 [2024-12-14 16:49:43.579117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.579140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.579222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.579245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.579327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.579348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.579441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.579463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.579540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.579570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 
00:36:13.639 [2024-12-14 16:49:43.579739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.579761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.579858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.639 [2024-12-14 16:49:43.579880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.639 qpair failed and we were unable to recover it. 00:36:13.639 [2024-12-14 16:49:43.580039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.580071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.580162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.580185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.580275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.580305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 
00:36:13.640 [2024-12-14 16:49:43.580389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.580412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.580505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.580527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.580757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.580781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.580876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.580899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.580983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.581006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 
00:36:13.640 [2024-12-14 16:49:43.581114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.581138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.581225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.581248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.581400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.581423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.581646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.581671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.581765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.581788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 
00:36:13.640 [2024-12-14 16:49:43.581886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.581910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.582016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.582130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.582235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.582347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 
00:36:13.640 [2024-12-14 16:49:43.582464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.582577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.582683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.582852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.582962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.582983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 
00:36:13.640 [2024-12-14 16:49:43.583151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.583174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.583266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.583288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.583436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.583460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.583547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.583575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.583667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.583694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 
00:36:13.640 [2024-12-14 16:49:43.583801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.583826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.583927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.583950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.584104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.584127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.584217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.584239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.584334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.584357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 
00:36:13.640 [2024-12-14 16:49:43.584452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.584480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.584580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.584603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.584692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.584716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.584816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.584838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.584988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.585011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 
00:36:13.640 [2024-12-14 16:49:43.585095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.585116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.585279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.585302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.585396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.585418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.585516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.640 [2024-12-14 16:49:43.585541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.640 qpair failed and we were unable to recover it. 00:36:13.640 [2024-12-14 16:49:43.585727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.585750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 
00:36:13.641 [2024-12-14 16:49:43.585902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.585924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.586019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.586121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.586219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.586333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 
00:36:13.641 [2024-12-14 16:49:43.586453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.586597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.586727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.586832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.586934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.586954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 
00:36:13.641 [2024-12-14 16:49:43.587100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.587119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.587264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.587283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.587444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.587463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.587551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.587576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 00:36:13.641 [2024-12-14 16:49:43.587672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.641 [2024-12-14 16:49:43.587691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.641 qpair failed and we were unable to recover it. 
00:36:13.641 [2024-12-14 16:49:43.587783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.641 [2024-12-14 16:49:43.587804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.641 qpair failed and we were unable to recover it.
[The same three-line failure (posix_sock_create connect() errno = 111, then the nvme_tcp_qpair_connect_sock error, then "qpair failed and we were unable to recover it.") repeats continuously from 16:49:43.587899 through 16:49:43.608203, almost always for tqpair=0xbebcd0. Interspersed repeats report tqpair=0x7fedf0000b90 (16:49:43.594140) and tqpair=0x7fedec000b90 (16:49:43.594324-43.594479 and 16:49:43.607305-43.608203). Every entry targets addr=10.0.0.2, port=4420.]
00:36:13.643 [2024-12-14 16:49:43.608317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.608351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.608482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.608514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.608694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.608729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.608915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.608948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.609125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.609147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 
00:36:13.643 [2024-12-14 16:49:43.609234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.609252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.609396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.609415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.609517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.609553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.609748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.609782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.609886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.609917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 
00:36:13.643 [2024-12-14 16:49:43.610039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.610073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.610179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.610198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.610273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.643 [2024-12-14 16:49:43.610290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.643 qpair failed and we were unable to recover it. 00:36:13.643 [2024-12-14 16:49:43.610362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.610380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.610477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.610495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.610656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.610677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.610752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.610770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.610852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.610873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.610946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.610965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.611122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.611215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.611317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.611414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.611506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.611601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.611697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.611800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.611956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.611987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.612090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.612123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.612240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.612272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.612397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.612430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.612537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.612581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.612690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.612722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.612900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.612932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.613106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.613148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.613226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.613244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.613316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.613334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.613569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.613589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.613683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.613701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.613783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.613801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.613878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.613896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.613981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.613999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.614084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.614103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.614190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.614208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.614312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.614330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.614418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.614437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.614513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.614531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.614632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.614652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.614739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.614758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.614899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.614918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.614989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.615008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.615081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.615099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.615183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.615201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.615341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.615360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.615437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.615456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 
00:36:13.644 [2024-12-14 16:49:43.615541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.615566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.644 [2024-12-14 16:49:43.615644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.644 [2024-12-14 16:49:43.615663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.644 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.615737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.615755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.615862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.615900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.616068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.616101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 
00:36:13.645 [2024-12-14 16:49:43.616270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.616303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.616403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.616435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.616643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.616677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.616794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.616826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.616982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.617014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 
00:36:13.645 [2024-12-14 16:49:43.617225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.617259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.617379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.617411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.617523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.617567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.617676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.617709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.617813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.617845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 
00:36:13.645 [2024-12-14 16:49:43.617960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.617992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.618100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.618133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.618260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.618292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.618412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.618444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 00:36:13.645 [2024-12-14 16:49:43.618595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.645 [2024-12-14 16:49:43.618629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.645 qpair failed and we were unable to recover it. 
00:36:13.645 [2024-12-14 16:49:43.618749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.618781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.618888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.618922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.619044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.619077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.619248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.619281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.619388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.619421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.619591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.619624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.619830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.619863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.619984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.620017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.620118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.620150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.620267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.620300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.620423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.620462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.620579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.620612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.620781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.620814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.620917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.620950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.621055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.621086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.621263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.621297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.621400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.621432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.621599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.621632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.621803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.621835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.622070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.622102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.622230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.622262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.622448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.622481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.622601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.622634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.622738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.645 [2024-12-14 16:49:43.622770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.645 qpair failed and we were unable to recover it.
00:36:13.645 [2024-12-14 16:49:43.622930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.932 [2024-12-14 16:49:43.622998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.932 qpair failed and we were unable to recover it.
00:36:13.932 [2024-12-14 16:49:43.623219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.932 [2024-12-14 16:49:43.623278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.932 qpair failed and we were unable to recover it.
00:36:13.932 [2024-12-14 16:49:43.623461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.932 [2024-12-14 16:49:43.623528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:13.932 qpair failed and we were unable to recover it.
00:36:13.932 [2024-12-14 16:49:43.623666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.932 [2024-12-14 16:49:43.623701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.932 qpair failed and we were unable to recover it.
00:36:13.932 [2024-12-14 16:49:43.623822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.932 [2024-12-14 16:49:43.623854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.932 qpair failed and we were unable to recover it.
00:36:13.932 [2024-12-14 16:49:43.623970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.932 [2024-12-14 16:49:43.624003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.932 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.624124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.624156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.624261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.624293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.624481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.624515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.624764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.624800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.624918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.624951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.626752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.626809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.627019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.627054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.627181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.627210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.627449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.627479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.627656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.627688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.627810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.627839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.627960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.627988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.628091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.628120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.628282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.628311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.628410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.628439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.628551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.628589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.628717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.628746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.628856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.628886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.629050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.629078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.629190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.629218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.629383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.629412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.629528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.629569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.629689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.629719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.629820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.629850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.629962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.629991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.630162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.630193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.630368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.630398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.630502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.630531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.630652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.630682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.630786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.630815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.630916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.630945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.631056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.631084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.631273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.631302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.631465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.631494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.631727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.631758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.933 [2024-12-14 16:49:43.631878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.933 [2024-12-14 16:49:43.631908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.933 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.632011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.632040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.632226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.632256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.632372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.632401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.632515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.632544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.632672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.632702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.632874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.632903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.633002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.633032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.633131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.633160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.633274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.633304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.633480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.633509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.633710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.633741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.633847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.633876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.634056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.634090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.634200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.634229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.634412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.634442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.634617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.634647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.634767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.634798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.634921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.634954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.635066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.635098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.635211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.635244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.635370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.635403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.635606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.635640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.635807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.635840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.635941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.635974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.636167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.636196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.636293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.636323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.636499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.636529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.636754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.636784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.636904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.636931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.637104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.637132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.637244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.637274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.637460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.637491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.637663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.637693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.637825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.637852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.638010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.638038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.638127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.638154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.638257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.934 [2024-12-14 16:49:43.638285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.934 qpair failed and we were unable to recover it.
00:36:13.934 [2024-12-14 16:49:43.638381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.638408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.638500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.638526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.639729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.639775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.639963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.639993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.640125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.640157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.640353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.640385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.640490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.640523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.640761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.640835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.641008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.641065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.641264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.641299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.641489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.641522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.641680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.935 [2024-12-14 16:49:43.641718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.935 qpair failed and we were unable to recover it.
00:36:13.935 [2024-12-14 16:49:43.641831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.641864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.642041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.642075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.642196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.642229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.642341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.642373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.642480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.642511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 
00:36:13.935 [2024-12-14 16:49:43.642692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.642725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.642909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.642941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.643056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.643088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.643327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.643360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.643481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.643512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 
00:36:13.935 [2024-12-14 16:49:43.643627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.643661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.643783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.643815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.643939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.643972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.644095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.644124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.644221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.644248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 
00:36:13.935 [2024-12-14 16:49:43.644338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.644367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.644477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.644505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.644631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.644661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.644836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.644864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.644987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.645015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 
00:36:13.935 [2024-12-14 16:49:43.645194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.645227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.645336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.645370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.645481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.645514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.645714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.645747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.645874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.645908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 
00:36:13.935 [2024-12-14 16:49:43.646015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.646048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.935 [2024-12-14 16:49:43.646223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.935 [2024-12-14 16:49:43.646251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.935 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.646363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.646390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.646499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.646527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.646633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.646659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 
00:36:13.936 [2024-12-14 16:49:43.646822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.646848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.647015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.647046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.647161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.647186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.647292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.647318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.647422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.647447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 
00:36:13.936 [2024-12-14 16:49:43.647543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.647598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.647701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.647732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.647840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.647871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.648086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.648117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.649481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.649533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 
00:36:13.936 [2024-12-14 16:49:43.649803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.649835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.650072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.650103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.650232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.650260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.650360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.650389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.650501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.650531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 
00:36:13.936 [2024-12-14 16:49:43.650651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.650680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.650785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.650813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.650916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.650944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.651122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.651153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.651286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.651317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 
00:36:13.936 [2024-12-14 16:49:43.651437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.651469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.651583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.651616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.651749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.651780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.651893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.651923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.652110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.652139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 
00:36:13.936 [2024-12-14 16:49:43.652312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.652342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.652456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.652486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.652665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.652696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.652817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.652847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.652968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.652998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 
00:36:13.936 [2024-12-14 16:49:43.653104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.653135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.653248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.653278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.653383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.653413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.653535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.653604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.653710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.653740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 
00:36:13.936 [2024-12-14 16:49:43.653855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.653886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.936 qpair failed and we were unable to recover it. 00:36:13.936 [2024-12-14 16:49:43.654072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.936 [2024-12-14 16:49:43.654102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 00:36:13.937 [2024-12-14 16:49:43.654283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.654311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 00:36:13.937 [2024-12-14 16:49:43.654413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.654442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 00:36:13.937 [2024-12-14 16:49:43.654697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.654726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 
00:36:13.937 [2024-12-14 16:49:43.654828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.654856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 00:36:13.937 [2024-12-14 16:49:43.655022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.655050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 00:36:13.937 [2024-12-14 16:49:43.655216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.655244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 00:36:13.937 [2024-12-14 16:49:43.655346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.655388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 00:36:13.937 [2024-12-14 16:49:43.655578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.655611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 
00:36:13.937 [2024-12-14 16:49:43.655726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.937 [2024-12-14 16:49:43.655757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.937 qpair failed and we were unable to recover it. 
[... the posix_sock_create (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair above repeats continuously from 16:49:43.655 through 16:49:43.677, alternating between tqpair=0xbebcd0 and tqpair=0x7fedec000b90, always with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it."; duplicate repetitions elided ...]
00:36:13.940 [2024-12-14 16:49:43.677104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.677136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.677353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.677384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.677506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.677538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.677690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.677724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.677832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.677864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-14 16:49:43.677966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.677997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.678125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.678157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.678278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.678309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.678414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.678446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.678681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.678715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-14 16:49:43.678830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.678862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.679053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.679085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.679215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.679248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.679424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.679456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.679567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.679599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-14 16:49:43.679839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.679871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.680089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.680122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.680286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.680324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.680426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.680458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.680586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.680620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-14 16:49:43.680739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.680770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.680941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.680973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.681105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.681138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.681240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.681271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.681388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.681421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 
00:36:13.940 [2024-12-14 16:49:43.681598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.681632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.940 [2024-12-14 16:49:43.681872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.940 [2024-12-14 16:49:43.681904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.940 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.682142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.682174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.682365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.682397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.682513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.682545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 
00:36:13.941 [2024-12-14 16:49:43.682744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.682775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.682950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.682980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.683146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.683176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.683355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.683387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.683577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.683608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 
00:36:13.941 [2024-12-14 16:49:43.683775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.683805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.683975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.684005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.684191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.684231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.684356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.684385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.684505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.684534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 
00:36:13.941 [2024-12-14 16:49:43.684777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.684809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.684920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.684950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.685060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.685090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.685210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.685241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.685357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.685387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 
00:36:13.941 [2024-12-14 16:49:43.685514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.685545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.685657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.685688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.685814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.685846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.685964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.685996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.686104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.686135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 
00:36:13.941 [2024-12-14 16:49:43.686351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.686382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.686588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.686621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.686788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.686819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.686928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.686961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.687071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.687103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 
00:36:13.941 [2024-12-14 16:49:43.687219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.687251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.687447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.687480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.687743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.687776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.687962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.688000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.688221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.688254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 
00:36:13.941 [2024-12-14 16:49:43.688450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.941 [2024-12-14 16:49:43.688482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.941 qpair failed and we were unable to recover it. 00:36:13.941 [2024-12-14 16:49:43.688718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.688751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.688866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.688898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.689002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.689034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.689145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.689177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 
00:36:13.942 [2024-12-14 16:49:43.689347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.689380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.689608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.689641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.689812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.689845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.689964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.689996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.690236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.690269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 
00:36:13.942 [2024-12-14 16:49:43.690439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.690471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.690646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.690679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.690940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.690973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.691181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.691214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.691318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.691351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 
00:36:13.942 [2024-12-14 16:49:43.691543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.691587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.691824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.691857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.691988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.692020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.692122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.692154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.692259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.692291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 
00:36:13.942 [2024-12-14 16:49:43.692488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.692520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.692662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.692695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.692879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.692911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.693014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.693047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 00:36:13.942 [2024-12-14 16:49:43.693296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.942 [2024-12-14 16:49:43.693330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.942 qpair failed and we were unable to recover it. 
00:36:13.942 [2024-12-14 16:49:43.693448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.693488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.693671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.693705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.693886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.693919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.694112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.694147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.694275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.694307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.694418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.694451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.694575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.694607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.694725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.694757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.694870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.694908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.695080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.695115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.695305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.695338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.695453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.695488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.695683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.695717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.695887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.695920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.942 [2024-12-14 16:49:43.696087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.942 [2024-12-14 16:49:43.696160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.942 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.696326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.696399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.696536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.696588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.696699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.696733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.696973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.697005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.697198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.697231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.697432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.697472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.697702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.697737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.697862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.697895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.698089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.698121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.698245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.698277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.698401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.698432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.698536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.698577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.698683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.698724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.698835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.698867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.699048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.699080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.699280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.699312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.699483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.699515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.699648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.699681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.699797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.699829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.699933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.699966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.700153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.700185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.700302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.700334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.700506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.700538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.700716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.700748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.700919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.700952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.701064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.701096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.701282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.701314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.701495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.701528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.701676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.701709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.701877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.701909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.702086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.702119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.702229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.702261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.702382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.702414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.702609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.702644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.702772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.702805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.702930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.702962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.703149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.703180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.703295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.703329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.703434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.943 [2024-12-14 16:49:43.703466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.943 qpair failed and we were unable to recover it.
00:36:13.943 [2024-12-14 16:49:43.703602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.703647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.703826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.703859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.704045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.704078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.704249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.704281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.704386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.704419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.704654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.704690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.704862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.704895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.705012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.705045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.705260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.705293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.705425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.705457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.705725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.705760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.705879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.705912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.706015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.706048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.706164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.706206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.706340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.706372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.706538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.706586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.706691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.706723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.706913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.706946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.707114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.707147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.707317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.707349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.707567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.707601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.707711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.707744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.707911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.707944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.708138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.708170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.708288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.708320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.708436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.708468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.708647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.708681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.708798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.708831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.708950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.708981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.709092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.709124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.709303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.709338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.709513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.709546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.709727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.709761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.709871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.709903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.710075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.710107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.710207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.710239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.710424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.710456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.710645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.710678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.944 [2024-12-14 16:49:43.710856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.944 [2024-12-14 16:49:43.710889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.944 qpair failed and we were unable to recover it.
00:36:13.945 [2024-12-14 16:49:43.711057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.945 [2024-12-14 16:49:43.711090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:13.945 qpair failed and we were unable to recover it.
00:36:13.945 [2024-12-14 16:49:43.711246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.945 [2024-12-14 16:49:43.711318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:13.945 qpair failed and we were unable to recover it.
00:36:13.945 [2024-12-14 16:49:43.711515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.711550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.711686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.711720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.711921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.711956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.712155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.712188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.712415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.712449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 
00:36:13.945 [2024-12-14 16:49:43.712571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.712606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.712735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.712773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.712894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.712927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.713063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.713096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.713379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.713412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 
00:36:13.945 [2024-12-14 16:49:43.713519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.713551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.713734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.713773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.713874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.713917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.714042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.714094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.714235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.714269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 
00:36:13.945 [2024-12-14 16:49:43.714447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.714478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.714650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.714686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.714791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.714823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.714939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.714972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.715229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.715262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 
00:36:13.945 [2024-12-14 16:49:43.715445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.715476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.715605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.715639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.715814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.715852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.716030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.716063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.716247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.716280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 
00:36:13.945 [2024-12-14 16:49:43.716495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.716527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.716659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.716692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.716804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.716837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.717012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.717043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.717156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.717189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 
00:36:13.945 [2024-12-14 16:49:43.717369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.717402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.717584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.717618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.717786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.717818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.717919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.717956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 00:36:13.945 [2024-12-14 16:49:43.718126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.945 [2024-12-14 16:49:43.718159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.945 qpair failed and we were unable to recover it. 
00:36:13.945 [2024-12-14 16:49:43.718279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.718311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.718478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.718511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.718766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.718800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.719041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.719074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.719255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.719287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 
00:36:13.946 [2024-12-14 16:49:43.719491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.719523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.719719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.719752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.719935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.719967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.720209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.720242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.720413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.720445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 
00:36:13.946 [2024-12-14 16:49:43.720579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.720612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.720852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.720884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.720990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.721022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.721211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.721243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.721421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.721453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 
00:36:13.946 [2024-12-14 16:49:43.721689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.721722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.721840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.721873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.721984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.722022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.722130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.722163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.722287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.722321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 
00:36:13.946 [2024-12-14 16:49:43.722443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.722475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.722581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.722614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.722794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.722828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.722934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.722966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.723078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.723111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 
00:36:13.946 [2024-12-14 16:49:43.723217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.723252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.723424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.723457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.723693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.723726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.723974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.724006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.724113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.724145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 
00:36:13.946 [2024-12-14 16:49:43.724348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.724381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.724520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.724565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.724763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.724795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.946 qpair failed and we were unable to recover it. 00:36:13.946 [2024-12-14 16:49:43.724910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.946 [2024-12-14 16:49:43.724942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.725063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.725108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 
00:36:13.947 [2024-12-14 16:49:43.725293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.725332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.725453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.725487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.725724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.725759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.725929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.725962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.726075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.726107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 
00:36:13.947 [2024-12-14 16:49:43.726217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.726249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.726370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.726410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.726685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.726720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.726868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.726902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.727096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.727130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 
00:36:13.947 [2024-12-14 16:49:43.727311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.727343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.727462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.727496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.727651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.727699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.727818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.727851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.727960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.727992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 
00:36:13.947 [2024-12-14 16:49:43.728100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.728135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.728307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.728340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.728519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.728553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.728755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.728790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 00:36:13.947 [2024-12-14 16:49:43.728904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.947 [2024-12-14 16:49:43.728937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.947 qpair failed and we were unable to recover it. 
00:36:13.948 [2024-12-14 16:49:43.733765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.948 [2024-12-14 16:49:43.733837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:13.948 qpair failed and we were unable to recover it.
00:36:13.950 [2024-12-14 16:49:43.750515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.750548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.750662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.750695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.750889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.750922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.751030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.751063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.751231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.751263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 16:49:43.751389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.751422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.751599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.751633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.751829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.751861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.752046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.752079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.752184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.752217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 16:49:43.752390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.752422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.752594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.752628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.752799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.752832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.752957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.752990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.753117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.753150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 16:49:43.753323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.753356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.753469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.753502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.753631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.753665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.753847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.753879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.754047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.754080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 
00:36:13.950 [2024-12-14 16:49:43.754254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.754287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.754495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.950 [2024-12-14 16:49:43.754528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.950 qpair failed and we were unable to recover it. 00:36:13.950 [2024-12-14 16:49:43.754660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.754692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.754798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.754830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.754952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.754984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 
00:36:13.951 [2024-12-14 16:49:43.755097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.755130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.755249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.755281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.755411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.755444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.755618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.755653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.755773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.755807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 
00:36:13.951 [2024-12-14 16:49:43.756048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.756080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.756333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.756365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.756468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.756500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.756687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.756719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.756890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.756922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 
00:36:13.951 [2024-12-14 16:49:43.757117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.757150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.757319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.757351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.757539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.757580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.757764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.757796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.757968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.758011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 
00:36:13.951 [2024-12-14 16:49:43.758200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.758232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.758345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.758378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.758563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.758596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.758832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.758864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.759135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.759166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 
00:36:13.951 [2024-12-14 16:49:43.759335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.759367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.759580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.759614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.759879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.759910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.760030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.760062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.760235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.760267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 
00:36:13.951 [2024-12-14 16:49:43.760445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.760477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.760717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.760751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.760992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.761024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.761152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.761184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.761305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.761338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 
00:36:13.951 [2024-12-14 16:49:43.761602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.761635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.761748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.761780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.761979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.762010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.762125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.762157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.762329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.762361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 
00:36:13.951 [2024-12-14 16:49:43.762475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.762507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.762635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.762668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.951 qpair failed and we were unable to recover it. 00:36:13.951 [2024-12-14 16:49:43.762842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.951 [2024-12-14 16:49:43.762874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.762992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.763024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.763132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.763164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 
00:36:13.952 [2024-12-14 16:49:43.763338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.763370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.763486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.763520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.763653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.763687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.763800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.763832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.763941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.763973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 
00:36:13.952 [2024-12-14 16:49:43.764214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.764246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.764372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.764404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.764521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.764553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.764832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.764864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 00:36:13.952 [2024-12-14 16:49:43.764979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.765011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 
00:36:13.952 [2024-12-14 16:49:43.765301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.952 [2024-12-14 16:49:43.765333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.952 qpair failed and we were unable to recover it. 
[... the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" pair repeats for tqpair=0x7fedec000b90, timestamps 16:49:43.765519 through 16:49:43.787094 ...]
00:36:13.955 [2024-12-14 16:49:43.787420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.787493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 
[... the same pair repeats for tqpair=0x7fedf0000b90, timestamps 16:49:43.787744 through 16:49:43.788188 ...]
00:36:13.955 [2024-12-14 16:49:43.788381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.788414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.788603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.788638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.788816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.788848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.789055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.789087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.789208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.789240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 
00:36:13.955 [2024-12-14 16:49:43.789416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.789448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.789621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.789654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.789844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.789876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.790068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.790100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.790334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.790376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 
00:36:13.955 [2024-12-14 16:49:43.790478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.790511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.790695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.790728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.790897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.790929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.791056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.791089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.791273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.791306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 
00:36:13.955 [2024-12-14 16:49:43.791484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.955 [2024-12-14 16:49:43.791517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.955 qpair failed and we were unable to recover it. 00:36:13.955 [2024-12-14 16:49:43.791706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.791739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.791860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.791892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.792009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.792041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.792219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.792251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 
00:36:13.956 [2024-12-14 16:49:43.792356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.792389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.792517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.792550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.792765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.792798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.792912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.792945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.793114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.793146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 
00:36:13.956 [2024-12-14 16:49:43.793265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.793298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.793425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.793457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.793575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.793609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.793848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.793881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.794048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.794080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 
00:36:13.956 [2024-12-14 16:49:43.794261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.794294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.794409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.794441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.794650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.794684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.794855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.794888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.795087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.795119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 
00:36:13.956 [2024-12-14 16:49:43.795384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.795416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.795563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.795596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.795711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.795745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.795910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.795942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.796044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.796076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 
00:36:13.956 [2024-12-14 16:49:43.796261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.796293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.796393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.796425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.796536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.796575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.796676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.796709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.796873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.796905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 
00:36:13.956 [2024-12-14 16:49:43.797070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.797102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.797305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.797337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.797531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.797576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.797769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.797802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.797915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.797959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 
00:36:13.956 [2024-12-14 16:49:43.798225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.798257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.798391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.798423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.798534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.798579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.956 [2024-12-14 16:49:43.798772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.956 [2024-12-14 16:49:43.798805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.956 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.798916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.798949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 
00:36:13.957 [2024-12-14 16:49:43.799070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.799102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.799365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.799398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.799655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.799689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.799820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.799852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.800024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.800057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 
00:36:13.957 [2024-12-14 16:49:43.800223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.800256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.800372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.800404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.800580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.800613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.800818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.800850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.801036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.801068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 
00:36:13.957 [2024-12-14 16:49:43.801189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.801221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.801409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.801440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.801632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.801665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.801792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.801824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.801938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.801969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 
00:36:13.957 [2024-12-14 16:49:43.802145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.802178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.802282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.802314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.802598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.802631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.802785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.802818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 00:36:13.957 [2024-12-14 16:49:43.802988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.957 [2024-12-14 16:49:43.803020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.957 qpair failed and we were unable to recover it. 
00:36:13.957 [2024-12-14 16:49:43.803195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.957 [2024-12-14 16:49:43.803227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:13.957 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequences repeated for tqpair=0x7fedf0000b90 through 16:49:43.811602, then for tqpair=0xbebcd0 from 16:49:43.811760 through 16:49:43.826622, all targeting addr=10.0.0.2, port=4420 ...]
00:36:13.960 [2024-12-14 16:49:43.826794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.826827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.826941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.826974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.827140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.827172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.827353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.827385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.827574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.827607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 
00:36:13.960 [2024-12-14 16:49:43.827777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.827809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.827910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.827942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.828114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.828147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.828272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.828304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.828487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.828520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 
00:36:13.960 [2024-12-14 16:49:43.828644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.828678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.828846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.828879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.828990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.829023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.829261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.829293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.829477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.829510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 
00:36:13.960 [2024-12-14 16:49:43.829635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.829668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.829820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.829852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.830024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.960 [2024-12-14 16:49:43.830056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.960 qpair failed and we were unable to recover it. 00:36:13.960 [2024-12-14 16:49:43.830159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.830191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.830386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.830418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 
00:36:13.961 [2024-12-14 16:49:43.830599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.830633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.830809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.830848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.830957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.830990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.831180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.831213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.831328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.831360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 
00:36:13.961 [2024-12-14 16:49:43.831565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.831599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.831834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.831867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.832050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.832083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.832262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.832295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.832522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.832555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 
00:36:13.961 [2024-12-14 16:49:43.832735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.832766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.832869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.832901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.833135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.833168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.833288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.833319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.833423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.833454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 
00:36:13.961 [2024-12-14 16:49:43.833570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.833604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.833726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.833759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.833964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.833996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.834186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.834218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.834338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.834371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 
00:36:13.961 [2024-12-14 16:49:43.834540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.834595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.834699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.834731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.834912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.834945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.835112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.835144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.835317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.835350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 
00:36:13.961 [2024-12-14 16:49:43.835518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.835550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.835747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.835780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.835962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.835994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.836129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.836161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.836270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.836304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 
00:36:13.961 [2024-12-14 16:49:43.836411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.836444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.836546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.836590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.836775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.836807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.836920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.961 [2024-12-14 16:49:43.836951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.961 qpair failed and we were unable to recover it. 00:36:13.961 [2024-12-14 16:49:43.837204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.837237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-12-14 16:49:43.837415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.837448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.837578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.837612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.837788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.837820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.838012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.838044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.838227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.838259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-12-14 16:49:43.838372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.838404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.838515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.838548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.838692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.838730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.838897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.838928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.839110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.839143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-12-14 16:49:43.839402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.839435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.839620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.839653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.839773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.839805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.839918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.839950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.840134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.840166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-12-14 16:49:43.840349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.840381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.840503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.840535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.840649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.840682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.840801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.840833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.840998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.841030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-12-14 16:49:43.841130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.841162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.841426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.841459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.841579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.841612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.841781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.841814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.841926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.841958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-12-14 16:49:43.842124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.842156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.842252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.842285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.842455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.842488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.842665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.842698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-12-14 16:49:43.842861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.962 [2024-12-14 16:49:43.842894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.962 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-12-14 16:49:43.843060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.962 [2024-12-14 16:49:43.843092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.962 qpair failed and we were unable to recover it.
00:36:13.962 [2024-12-14 16:49:43.843296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.962 [2024-12-14 16:49:43.843329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.962 qpair failed and we were unable to recover it.
00:36:13.962 [2024-12-14 16:49:43.843506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.962 [2024-12-14 16:49:43.843539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.962 qpair failed and we were unable to recover it.
00:36:13.962 [2024-12-14 16:49:43.843793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.962 [2024-12-14 16:49:43.843826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.962 qpair failed and we were unable to recover it.
00:36:13.962 [2024-12-14 16:49:43.843947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.962 [2024-12-14 16:49:43.843985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.962 qpair failed and we were unable to recover it.
00:36:13.962 [2024-12-14 16:49:43.844225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.962 [2024-12-14 16:49:43.844258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.962 qpair failed and we were unable to recover it.
00:36:13.962 [2024-12-14 16:49:43.844358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.844390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.844499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.844531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.844742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.844775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.844958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.844991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.845101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.845134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.845251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.845283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.845467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.845499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.845680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.845714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.845903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.845935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.846055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.846087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.846279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.846311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.846600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.846634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.846826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.846859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.846963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.846996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.847181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.847213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.847388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.847421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.847546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.847587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.847766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.847799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.847968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.848001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.848105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.848136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.848302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.848335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.848452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.848485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.848655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.848689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.848889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.848922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.849025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.849057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.849176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.849208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.849381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.849414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.849679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.849713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.849826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.849858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.850082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.850115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.850322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.850355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.850540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.850585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.850699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.850731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.850967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.850999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.851173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.851206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.851455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.851488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.851697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.851731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.851934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.851967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.963 [2024-12-14 16:49:43.852138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.963 [2024-12-14 16:49:43.852170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.963 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.852348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.852385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.852562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.852596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.852797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.852830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.852942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.852975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.853086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.853119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.853241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.853274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.853448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.853480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.853674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.853708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.853877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.853910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.854080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.854112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.854230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.854262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.854364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.854396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.854604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.854637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.854831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.854864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.854999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.855032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.855319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.855351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.855463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.855495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.855699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.855733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.855922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.855955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.856150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.856182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.856445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.856477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.856594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.856628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.856799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.856831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.856941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.856974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.857083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.857116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.857218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.857249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.857509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.857542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.857665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.857704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.857824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.857856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.858027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.858059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.858174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.858206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.858381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.858414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.964 qpair failed and we were unable to recover it.
00:36:13.964 [2024-12-14 16:49:43.858600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.964 [2024-12-14 16:49:43.858634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.858806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.858839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.859029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.859061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.859232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.859264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.859383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.859416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.859586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.859619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.859787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.859820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.859992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.860024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.860203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.860235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.860378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.860412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.860599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.860633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.860871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.860904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.861022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.861055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.861240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.861272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.861439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.861471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.861587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.861621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.861791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.861823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.861922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.965 [2024-12-14 16:49:43.861954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.965 qpair failed and we were unable to recover it.
00:36:13.965 [2024-12-14 16:49:43.862204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.862237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.862484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.862516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.862718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.862753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.863015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.863047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.863154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.863186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 
00:36:13.965 [2024-12-14 16:49:43.863359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.863392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.863567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.863601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.863768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.863800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.863914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.863947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.864189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.864221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 
00:36:13.965 [2024-12-14 16:49:43.864482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.864514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.864648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.864681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.864852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.864885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.864997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.865029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.865266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.865298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 
00:36:13.965 [2024-12-14 16:49:43.865416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.865448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.865667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.865701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.865874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.865907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.866091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.866129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 00:36:13.965 [2024-12-14 16:49:43.866239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.965 [2024-12-14 16:49:43.866271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.965 qpair failed and we were unable to recover it. 
00:36:13.966 [2024-12-14 16:49:43.866392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.866424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.866591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.866625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.866745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.866778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.867034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.867067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.867271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.867304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 
00:36:13.966 [2024-12-14 16:49:43.867411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.867443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.867574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.867607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.867776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.867809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.867926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.867958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.868135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.868167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 
00:36:13.966 [2024-12-14 16:49:43.868276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.868308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.868422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.868455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.868631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.868664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.868836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.868868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.869040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.869073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 
00:36:13.966 [2024-12-14 16:49:43.869253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.869286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.869403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.869436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.869628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.869662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.869858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.869891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.870100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.870133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 
00:36:13.966 [2024-12-14 16:49:43.870323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.870355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.870555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.870613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.870786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.870819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.871008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.871040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.871278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.871310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 
00:36:13.966 [2024-12-14 16:49:43.871570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.871609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.871795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.871827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.871940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.871972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.872155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.872187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.872380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.872413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 
00:36:13.966 [2024-12-14 16:49:43.872515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.872548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.872777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.872809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.872998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.873031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.873213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.873246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.873362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.873394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 
00:36:13.966 [2024-12-14 16:49:43.873577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.873612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.873731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.873764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.966 qpair failed and we were unable to recover it. 00:36:13.966 [2024-12-14 16:49:43.873933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.966 [2024-12-14 16:49:43.873963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.874198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.874230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.874340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.874373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 
00:36:13.967 [2024-12-14 16:49:43.874550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.874613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.874787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.874819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.875005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.875038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.875154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.875186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.875364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.875397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 
00:36:13.967 [2024-12-14 16:49:43.875606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.875641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.875814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.875847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.876038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.876070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.876173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.876205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.876387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.876419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 
00:36:13.967 [2024-12-14 16:49:43.876533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.876574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.876684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.876717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.876923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.876956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.877065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.877098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.877223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.877254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 
00:36:13.967 [2024-12-14 16:49:43.877365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.877398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.877516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.877548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.877738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.877769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.877942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.877975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.878167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.878200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 
00:36:13.967 [2024-12-14 16:49:43.878396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.878427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.878547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.878587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.878784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.878817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.878983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.879015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 00:36:13.967 [2024-12-14 16:49:43.879122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.967 [2024-12-14 16:49:43.879155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.967 qpair failed and we were unable to recover it. 
00:36:13.967 [2024-12-14 16:49:43.879255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.967 [2024-12-14 16:49:43.879288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:13.967 qpair failed and we were unable to recover it.
[... the three messages above repeat approximately 109 more times for tqpair=0xbebcd0 between 16:49:43.879 and 16:49:43.902; only the timestamps advance, the content is otherwise identical ...]
00:36:13.970 [2024-12-14 16:49:43.902793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.902864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.903123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.903195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.903432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.903467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.903584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.903619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.903739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.903772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 
00:36:13.970 [2024-12-14 16:49:43.904064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.904095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.904280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.904312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.904418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.904450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.904580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.904613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.904779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.904810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 
00:36:13.970 [2024-12-14 16:49:43.905005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.905036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.905310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.905341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.905548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.905592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.905706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.905738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.905852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.905884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 
00:36:13.970 [2024-12-14 16:49:43.905990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.906022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.906214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.906246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.906428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.906459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.906722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.970 [2024-12-14 16:49:43.906755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.970 qpair failed and we were unable to recover it. 00:36:13.970 [2024-12-14 16:49:43.906942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.906975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 
00:36:13.971 [2024-12-14 16:49:43.907160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.907194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.907361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.907392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.907591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.907624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.907739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.907770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.907881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.907912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 
00:36:13.971 [2024-12-14 16:49:43.908079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.908110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.908221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.908253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.908393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.908424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.908548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.908589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.908698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.908730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 
00:36:13.971 [2024-12-14 16:49:43.908907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.908939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.909055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.909086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.909190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.909221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.909344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.909376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.909509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.909540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 
00:36:13.971 [2024-12-14 16:49:43.909801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.909833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.909953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.909986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.910103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.910134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.910300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.910332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.910499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.910532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 
00:36:13.971 [2024-12-14 16:49:43.910657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.910695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.910948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.910980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.911163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.911195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.911305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.911336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.911518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.911550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 
00:36:13.971 [2024-12-14 16:49:43.911740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.911773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.911898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.911930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.912098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.912130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.912300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.912332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.912530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.912571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 
00:36:13.971 [2024-12-14 16:49:43.912674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.912706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.912895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.912927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.913100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.913132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.913272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.913303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.913480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.913510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 
00:36:13.971 [2024-12-14 16:49:43.913707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.913742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.913863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.971 [2024-12-14 16:49:43.913894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.971 qpair failed and we were unable to recover it. 00:36:13.971 [2024-12-14 16:49:43.914001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.914032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.914221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.914253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.914447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.914479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 
00:36:13.972 [2024-12-14 16:49:43.914599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.914633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.914748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.914780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.914968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.914999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.915167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.915199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.915372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.915404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 
00:36:13.972 [2024-12-14 16:49:43.915607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.915639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.915807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.915839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.915959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.915990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.916174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.916206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.916388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.916419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 
00:36:13.972 [2024-12-14 16:49:43.916546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.916586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.916705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.916737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.916930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.916962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.917170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.917201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.917385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.917417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 
00:36:13.972 [2024-12-14 16:49:43.917539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.917579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.917685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.917716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.917830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.917862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.918030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.918062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.918171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.918203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 
00:36:13.972 [2024-12-14 16:49:43.918309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.918346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.918474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.918506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.918738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.918770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.918937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.918969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 00:36:13.972 [2024-12-14 16:49:43.919081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.972 [2024-12-14 16:49:43.919113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.972 qpair failed and we were unable to recover it. 
00:36:13.975 [2024-12-14 16:49:43.937756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.975 [2024-12-14 16:49:43.937784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.975 qpair failed and we were unable to recover it. 00:36:13.975 [2024-12-14 16:49:43.937954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.975 [2024-12-14 16:49:43.937982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.975 qpair failed and we were unable to recover it. 00:36:13.975 [2024-12-14 16:49:43.938077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.938104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.938284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.938315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.938491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.938521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 
00:36:13.976 [2024-12-14 16:49:43.938632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.938670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.938915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.938946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.939218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.939248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.939349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.939379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.939615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.939647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 
00:36:13.976 [2024-12-14 16:49:43.939908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.939938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.940125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.940153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.940248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.940275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.940515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.940543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.940673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.940701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 
00:36:13.976 [2024-12-14 16:49:43.940893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.940923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.941037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.941067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.941196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.941227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.941337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.941367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.941541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.941582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 
00:36:13.976 [2024-12-14 16:49:43.941758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.941789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.941895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.941937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.942042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.942070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.942165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.942194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.942447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.942475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 
00:36:13.976 [2024-12-14 16:49:43.942628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.942657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.942830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.942862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.943051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.943082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.943347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.943379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.943493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.943524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 
00:36:13.976 [2024-12-14 16:49:43.943701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.943733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.943833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.943864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.943972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.944004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.944178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.944208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.944322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.944353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 
00:36:13.976 [2024-12-14 16:49:43.944523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.944554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.944731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.944763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.976 [2024-12-14 16:49:43.944864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.976 [2024-12-14 16:49:43.944896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.976 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.945064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.945095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.945261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.945293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 
00:36:13.977 [2024-12-14 16:49:43.945459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.945491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.945601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.945633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.945810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.945841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.945958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.945989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.946108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.946139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 
00:36:13.977 [2024-12-14 16:49:43.946313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.946350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.946452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.946483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.946667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.946701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.946870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.946902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.947113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.947144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 
00:36:13.977 [2024-12-14 16:49:43.947382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.947413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.947516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.947547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.947738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.947769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.948020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.948050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.948167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.948198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 
00:36:13.977 [2024-12-14 16:49:43.948305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.948336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.948460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.948491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.948665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.948697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.948885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.948917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.949041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.949072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 
00:36:13.977 [2024-12-14 16:49:43.949170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.949201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.949323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.949354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.949539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.949580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.949705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.949736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.949939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.949970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 
00:36:13.977 [2024-12-14 16:49:43.950207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.950238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.950346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.950378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.950484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.950515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.950635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.950667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.950775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.950805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 
00:36:13.977 [2024-12-14 16:49:43.950977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.951009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.951121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.951153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.951331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.951362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.951531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.951571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 00:36:13.977 [2024-12-14 16:49:43.951676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.977 [2024-12-14 16:49:43.951706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.977 qpair failed and we were unable to recover it. 
00:36:13.977 [2024-12-14 16:49:43.951876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:13.977 [2024-12-14 16:49:43.951906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:13.977 qpair failed and we were unable to recover it.
00:36:13.977 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt through 2024-12-14 16:49:43.973780 ...]
00:36:13.981 [2024-12-14 16:49:43.973971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.974004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.974119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.974150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.974345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.974377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.974495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.974527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.974786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.974858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 
00:36:13.981 [2024-12-14 16:49:43.977715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.977790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.978122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.978162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.978360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.978394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.978581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.978615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.978794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.978827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 
00:36:13.981 [2024-12-14 16:49:43.978960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.978992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.979123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.979156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.979324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.979356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.979530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.979572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.979694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.979727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 
00:36:13.981 [2024-12-14 16:49:43.979844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.979885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.980056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.980088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.980260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.980292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.980403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.980436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.980544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.980592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 
00:36:13.981 [2024-12-14 16:49:43.980723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.981 [2024-12-14 16:49:43.980755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.981 qpair failed and we were unable to recover it. 00:36:13.981 [2024-12-14 16:49:43.980870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.980902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.981032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.981063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.981164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.981195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.981298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.981330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 
00:36:13.982 [2024-12-14 16:49:43.981497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.981529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.981696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.981767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.981963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.982000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.982249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.982282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.982403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.982435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 
00:36:13.982 [2024-12-14 16:49:43.982701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.982737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.982925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.982958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.983214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.983246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.983435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.983466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.983579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.983611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 
00:36:13.982 [2024-12-14 16:49:43.983719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.983750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.983866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.983897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.984102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.984133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.984255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.984287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.984463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.984494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 
00:36:13.982 [2024-12-14 16:49:43.984663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.984696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.984867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.984899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.985101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.985140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.985241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.985272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.985375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.985407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 
00:36:13.982 [2024-12-14 16:49:43.985601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.985635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.985750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.985782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.985885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.985917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.986157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.986189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.986295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.986327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 
00:36:13.982 [2024-12-14 16:49:43.986433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.986465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.986580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.986612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.986719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.986750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.986921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.986953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.987204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.987235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 
00:36:13.982 [2024-12-14 16:49:43.987337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.987368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.987487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.987523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.987649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.987682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.987873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.987905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.988159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.988192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 
00:36:13.982 [2024-12-14 16:49:43.988451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.988483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.988653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.988686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.988888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.988920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.982 qpair failed and we were unable to recover it. 00:36:13.982 [2024-12-14 16:49:43.989029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.982 [2024-12-14 16:49:43.989060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.983 qpair failed and we were unable to recover it. 00:36:13.983 [2024-12-14 16:49:43.989171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.983 [2024-12-14 16:49:43.989203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.983 qpair failed and we were unable to recover it. 
00:36:13.983 [2024-12-14 16:49:43.989307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.983 [2024-12-14 16:49:43.989339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.983 qpair failed and we were unable to recover it. 00:36:13.983 [2024-12-14 16:49:43.989514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.983 [2024-12-14 16:49:43.989546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.983 qpair failed and we were unable to recover it. 00:36:13.983 [2024-12-14 16:49:43.989669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.983 [2024-12-14 16:49:43.989701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.983 qpair failed and we were unable to recover it. 00:36:13.983 [2024-12-14 16:49:43.989888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.983 [2024-12-14 16:49:43.989919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:13.983 qpair failed and we were unable to recover it. 00:36:13.983 [2024-12-14 16:49:43.990090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:13.983 [2024-12-14 16:49:43.990128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.268 qpair failed and we were unable to recover it. 
00:36:14.268 [2024-12-14 16:49:43.990317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.268 [2024-12-14 16:49:43.990349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.268 qpair failed and we were unable to recover it. 00:36:14.269 [2024-12-14 16:49:43.990471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.269 [2024-12-14 16:49:43.990503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-12-14 16:49:43.990621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.269 [2024-12-14 16:49:43.990654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-12-14 16:49:43.990895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.269 [2024-12-14 16:49:43.990927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-12-14 16:49:43.991038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.269 [2024-12-14 16:49:43.991070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.269 qpair failed and we were unable to recover it. 
00:36:14.269 [2024-12-14 16:49:43.991180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.269 [2024-12-14 16:49:43.991212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.269 qpair failed and we were unable to recover it.
00:36:14.269 [... identical connect()/qpair-failure message pairs for tqpair=0x7fedec000b90 repeated 100 more times, timestamps 16:49:43.991389 through 16:49:44.009967 ...]
00:36:14.271 [2024-12-14 16:49:44.010221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.271 [2024-12-14 16:49:44.010290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.271 qpair failed and we were unable to recover it.
00:36:14.271 [2024-12-14 16:49:44.010518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.271 [2024-12-14 16:49:44.010608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:14.271 qpair failed and we were unable to recover it.
00:36:14.272 [... identical connect()/qpair-failure message pairs for tqpair=0x7fedf8000b90 repeated 12 more times, timestamps 16:49:44.010751 through 16:49:44.012886 ...]
00:36:14.272 [2024-12-14 16:49:44.012998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.013030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.013199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.013229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.013344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.013375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.013579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.013612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.013783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.013815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 
00:36:14.272 [2024-12-14 16:49:44.013985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.014016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.014197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.014228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.014433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.014463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.014636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.014675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.014851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.014884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 
00:36:14.272 [2024-12-14 16:49:44.015113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.015145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.015326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.015357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.015616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.015649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.015918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.015950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.016124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.016155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 
00:36:14.272 [2024-12-14 16:49:44.016272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.016303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.016471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.016503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.016691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.016723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.016899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.016930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.017048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.017080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 
00:36:14.272 [2024-12-14 16:49:44.017254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.017286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.017402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.017434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.017544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.017582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.017698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.017729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.017831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.017863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 
00:36:14.272 [2024-12-14 16:49:44.017966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.017997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.018238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.018309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.018501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.018538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.018821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.018857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.019032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.019064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 
00:36:14.272 [2024-12-14 16:49:44.019250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.019282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.019482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.019513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.019639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.019672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.019804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.272 [2024-12-14 16:49:44.019836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.272 qpair failed and we were unable to recover it. 00:36:14.272 [2024-12-14 16:49:44.019942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.019974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 
00:36:14.273 [2024-12-14 16:49:44.020072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.020104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.020269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.020301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.020483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.020515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.020699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.020731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.020953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.020985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 
00:36:14.273 [2024-12-14 16:49:44.021190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.021223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.021443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.021475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.021596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.021630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.021819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.021850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.021959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.021991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 
00:36:14.273 [2024-12-14 16:49:44.022248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.022281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.022397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.022428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.022537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.022589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.022814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.022846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.023018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.023050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 
00:36:14.273 [2024-12-14 16:49:44.023288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.023320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.023438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.023470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.023642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.023676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.023789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.023821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.024064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.024095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 
00:36:14.273 [2024-12-14 16:49:44.024269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.024301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.024414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.024445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.024649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.024682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.024879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.024910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.025098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.025130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 
00:36:14.273 [2024-12-14 16:49:44.025324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.025361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.025547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.025588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.025710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.025743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.025914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.025946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.026116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.026147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 
00:36:14.273 [2024-12-14 16:49:44.026356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.026388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.273 [2024-12-14 16:49:44.026520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.273 [2024-12-14 16:49:44.026566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.273 qpair failed and we were unable to recover it. 00:36:14.274 [2024-12-14 16:49:44.026748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.274 [2024-12-14 16:49:44.026781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.274 qpair failed and we were unable to recover it. 00:36:14.274 [2024-12-14 16:49:44.026963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.274 [2024-12-14 16:49:44.026995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.274 qpair failed and we were unable to recover it. 00:36:14.274 [2024-12-14 16:49:44.027110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.274 [2024-12-14 16:49:44.027142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.274 qpair failed and we were unable to recover it. 
00:36:14.274 [2024-12-14 16:49:44.027256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.274 [2024-12-14 16:49:44.027287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.274 qpair failed and we were unable to recover it. 00:36:14.274 [2024-12-14 16:49:44.027386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.274 [2024-12-14 16:49:44.027418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.274 qpair failed and we were unable to recover it. 00:36:14.274 [2024-12-14 16:49:44.027587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.274 [2024-12-14 16:49:44.027620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.274 qpair failed and we were unable to recover it. 00:36:14.274 [2024-12-14 16:49:44.027725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.274 [2024-12-14 16:49:44.027757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.274 qpair failed and we were unable to recover it. 00:36:14.274 [2024-12-14 16:49:44.027868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.274 [2024-12-14 16:49:44.027900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.274 qpair failed and we were unable to recover it. 
00:36:14.274 [2024-12-14 16:49:44.028011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.274 [2024-12-14 16:49:44.028042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.274 qpair failed and we were unable to recover it.
00:36:14.277 [2024-12-14 16:49:44.050340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.050372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.050549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.050591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.050775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.050807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.050984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.051015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.051206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.051238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 
00:36:14.277 [2024-12-14 16:49:44.051454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.051486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.051591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.051624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.051806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.051838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.052011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.052042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.052171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.052203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 
00:36:14.277 [2024-12-14 16:49:44.052313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.052345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.052447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.052479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.052645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.052677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.052864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.052896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.053029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.053061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 
00:36:14.277 [2024-12-14 16:49:44.053194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.053226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.053400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.053432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.053554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.053598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.053720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.053751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.053918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.053950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 
00:36:14.277 [2024-12-14 16:49:44.054117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.054148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.054271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.054303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.054424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.054456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.054649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.054683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.054880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.054912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 
00:36:14.277 [2024-12-14 16:49:44.055129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.055161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.055288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.055319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.055486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.277 [2024-12-14 16:49:44.055518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.277 qpair failed and we were unable to recover it. 00:36:14.277 [2024-12-14 16:49:44.055651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.055685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.055784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.055816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 
00:36:14.278 [2024-12-14 16:49:44.055985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.056017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.056141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.056172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.056361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.056392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.056636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.056669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.056840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.056872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 
00:36:14.278 [2024-12-14 16:49:44.057133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.057165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.057268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.057305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.057502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.057533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.057650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.057681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.057867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.057899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 
00:36:14.278 [2024-12-14 16:49:44.058008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.058041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.058169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.058201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.058310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.058342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.058621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.058654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.058830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.058861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 
00:36:14.278 [2024-12-14 16:49:44.059030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.059062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.059167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.059199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.059393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.059425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.059540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.059594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.059768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.059800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 
00:36:14.278 [2024-12-14 16:49:44.059911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.059943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.060062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.060094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.060212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.060243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.060432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.060464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.060645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.060679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 
00:36:14.278 [2024-12-14 16:49:44.060792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.060823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.060999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.061031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.061205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.061238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.061418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.061449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.061573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.061605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 
00:36:14.278 [2024-12-14 16:49:44.061775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.061807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.062068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.062099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.062206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.062239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.062449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.062481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.062611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.062644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 
00:36:14.278 [2024-12-14 16:49:44.062852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.062884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.278 [2024-12-14 16:49:44.063048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.278 [2024-12-14 16:49:44.063080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.278 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.063246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.063278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.063406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.063437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.063637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.063671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 
00:36:14.279 [2024-12-14 16:49:44.063793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.063825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.064007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.064040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.064208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.064240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.064342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.064374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.064474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.064506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 
00:36:14.279 [2024-12-14 16:49:44.064620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.064652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.064828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.064865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.065050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.065082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.065274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.065306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 00:36:14.279 [2024-12-14 16:49:44.065483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.279 [2024-12-14 16:49:44.065516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.279 qpair failed and we were unable to recover it. 
00:36:14.279 [2024-12-14 16:49:44.069821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:14.279 [2024-12-14 16:49:44.069891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 
00:36:14.279 qpair failed and we were unable to recover it. 
00:36:14.281 [2024-12-14 16:49:44.084662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.281 [2024-12-14 16:49:44.084695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.281 qpair failed and we were unable to recover it. 00:36:14.281 [2024-12-14 16:49:44.084805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.281 [2024-12-14 16:49:44.084837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.281 qpair failed and we were unable to recover it. 00:36:14.281 [2024-12-14 16:49:44.084954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.281 [2024-12-14 16:49:44.084987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.281 qpair failed and we were unable to recover it. 00:36:14.281 [2024-12-14 16:49:44.085217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.281 [2024-12-14 16:49:44.085289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.281 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.085510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.085546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 
00:36:14.282 [2024-12-14 16:49:44.085690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.085724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.085826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.085859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.085972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.086003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.086116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.086147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.086386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.086419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 
00:36:14.282 [2024-12-14 16:49:44.086545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.086592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.086800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.086832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.086935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.086967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.087136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.087167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.087279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.087311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 
00:36:14.282 [2024-12-14 16:49:44.087419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.087451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.087688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.087722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.088015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.088048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.088177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.088208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.088383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.088415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 
00:36:14.282 [2024-12-14 16:49:44.088517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.088549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.088729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.088761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.088875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.088906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.089048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.089080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.089200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.089231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 
00:36:14.282 [2024-12-14 16:49:44.089338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.089369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.089534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.089576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.089744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.089775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.089966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.089998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.090182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.090214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 
00:36:14.282 [2024-12-14 16:49:44.090378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.090427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.090604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.090637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.090740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.090772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.090940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.090971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.091077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.091108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 
00:36:14.282 [2024-12-14 16:49:44.091213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.091245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.091348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.091378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.091607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.282 [2024-12-14 16:49:44.091641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.282 qpair failed and we were unable to recover it. 00:36:14.282 [2024-12-14 16:49:44.091761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.091793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.091963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.091994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 
00:36:14.283 [2024-12-14 16:49:44.092180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.092212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.092327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.092360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.092584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.092617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.092739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.092770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.092895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.092928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 
00:36:14.283 [2024-12-14 16:49:44.093037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.093068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.093233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.093263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.093441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.093472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.093703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.093736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.093913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.093944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 
00:36:14.283 [2024-12-14 16:49:44.094110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.094141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.094250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.094282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.094446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.094477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.094647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.094679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.094790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.094821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 
00:36:14.283 [2024-12-14 16:49:44.095025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.095056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.095257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.095288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.095480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.095511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.095653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.095686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.095862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.095894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 
00:36:14.283 [2024-12-14 16:49:44.096002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.096034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.096162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.096194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.096368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.096398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.096511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.096543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.096723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.096756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 
00:36:14.283 [2024-12-14 16:49:44.096876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.096907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.097164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.097195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.097374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.097406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.097663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.097697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.097821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.097852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 
00:36:14.283 [2024-12-14 16:49:44.097960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.097990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.098221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.098290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.098453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.098524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.098733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.098769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.098982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.099014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 
00:36:14.283 [2024-12-14 16:49:44.099140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.099171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.099280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.099312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.283 [2024-12-14 16:49:44.099479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.283 [2024-12-14 16:49:44.099511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.283 qpair failed and we were unable to recover it. 00:36:14.284 [2024-12-14 16:49:44.099638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.284 [2024-12-14 16:49:44.099672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.284 qpair failed and we were unable to recover it. 00:36:14.284 [2024-12-14 16:49:44.099847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.284 [2024-12-14 16:49:44.099878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.284 qpair failed and we were unable to recover it. 
00:36:14.284 [2024-12-14 16:49:44.100005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.284 [2024-12-14 16:49:44.100037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.284 qpair failed and we were unable to recover it. 00:36:14.284 [2024-12-14 16:49:44.100136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.284 [2024-12-14 16:49:44.100166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.284 qpair failed and we were unable to recover it. 00:36:14.284 [2024-12-14 16:49:44.100340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.284 [2024-12-14 16:49:44.100372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.284 qpair failed and we were unable to recover it. 00:36:14.284 [2024-12-14 16:49:44.100483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.284 [2024-12-14 16:49:44.100513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.284 qpair failed and we were unable to recover it. 00:36:14.284 [2024-12-14 16:49:44.100716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.284 [2024-12-14 16:49:44.100758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.284 qpair failed and we were unable to recover it. 
00:36:14.284 [2024-12-14 16:49:44.100999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.284 [2024-12-14 16:49:44.101031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:14.284 qpair failed and we were unable to recover it.
00:36:14.287 (previous connect()/qpair-connect error triplet repeated through [2024-12-14 16:49:44.123434] for tqpair=0x7fedf8000b90, 0x7fedf0000b90, 0xbebcd0, and 0x7fedec000b90, all with addr=10.0.0.2, port=4420)
00:36:14.287 [2024-12-14 16:49:44.123603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.123638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.123741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.123774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.124013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.124045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.124236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.124268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.124439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.124472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 
00:36:14.287 [2024-12-14 16:49:44.124677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.124717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.124896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.124929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.125095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.125128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.125241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.125273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.125443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.125475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 
00:36:14.287 [2024-12-14 16:49:44.125596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.125630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.125738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.125770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.125944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.125977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.126156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.126189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.126388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.126421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 
00:36:14.287 [2024-12-14 16:49:44.126591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.126625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.126776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.126812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.126998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.127032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.127203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.127237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.127417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.127450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 
00:36:14.287 [2024-12-14 16:49:44.127646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.127680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.127804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.127838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.127958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.127990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.128164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.128196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.128391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.128424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 
00:36:14.287 [2024-12-14 16:49:44.128528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.128574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.128771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.128805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.129009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.129054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.287 qpair failed and we were unable to recover it. 00:36:14.287 [2024-12-14 16:49:44.129224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.287 [2024-12-14 16:49:44.129257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.129520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.129553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 
00:36:14.288 [2024-12-14 16:49:44.129766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.129800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.129918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.129950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.130129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.130166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.130278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.130312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.130498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.130529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 
00:36:14.288 [2024-12-14 16:49:44.130658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.130690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.130807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.130839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.130944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.130976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.131219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.131251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.131372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.131405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 
00:36:14.288 [2024-12-14 16:49:44.131513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.131546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.131765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.131798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.131976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.132007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.132174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.132207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.132372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.132403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 
00:36:14.288 [2024-12-14 16:49:44.132590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.132625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.132805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.132838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.133020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.133051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.133174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.133207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.133376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.133409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 
00:36:14.288 [2024-12-14 16:49:44.133634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.133668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.133859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.133891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.134073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.134105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.134227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.134258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.134431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.134463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 
00:36:14.288 [2024-12-14 16:49:44.134585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.134617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.134730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.134762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.134937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.134970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.135139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.135171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.135372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.135409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 
00:36:14.288 [2024-12-14 16:49:44.135601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.135634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.135824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.135857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.136116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.136148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.136249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.136281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.136405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.136437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 
00:36:14.288 [2024-12-14 16:49:44.136548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.136601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.136710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.136742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.136914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.288 [2024-12-14 16:49:44.136946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.288 qpair failed and we were unable to recover it. 00:36:14.288 [2024-12-14 16:49:44.137126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.137158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 00:36:14.289 [2024-12-14 16:49:44.137333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.137363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 
00:36:14.289 [2024-12-14 16:49:44.137641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.137675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 00:36:14.289 [2024-12-14 16:49:44.137922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.137954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 00:36:14.289 [2024-12-14 16:49:44.138121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.138153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 00:36:14.289 [2024-12-14 16:49:44.138327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.138360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 00:36:14.289 [2024-12-14 16:49:44.138576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.138610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 
00:36:14.289 [2024-12-14 16:49:44.138710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.138742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 00:36:14.289 [... identical connect() failed (errno = 111) / qpair failed pair repeated for tqpair=0xbebcd0 through 16:49:44.143059 ...]
00:36:14.289 [2024-12-14 16:49:44.143224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.289 [2024-12-14 16:49:44.143297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.289 qpair failed and we were unable to recover it. 00:36:14.290 [... identical connect() failed (errno = 111) / qpair failed pair repeated for tqpair=0x7fedf8000b90 through 16:49:44.150658 ...]
00:36:14.290 [2024-12-14 16:49:44.150885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.290 [2024-12-14 16:49:44.150957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.290 qpair failed and we were unable to recover it. 00:36:14.291 [... identical connect() failed (errno = 111) / qpair failed pair repeated for tqpair=0x7fedf0000b90 through 16:49:44.158356 ...]
00:36:14.291 [2024-12-14 16:49:44.158543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.291 [2024-12-14 16:49:44.158591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.291 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.158710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.158742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.158855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.158888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.158997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.159030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.159216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.159248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 
00:36:14.292 [2024-12-14 16:49:44.159373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.159406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.159509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.159541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.159654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.159688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.159923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.159957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.160085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.160117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 
00:36:14.292 [2024-12-14 16:49:44.160232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.160264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.160453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.160486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.160593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.160625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.160791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.160830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.160955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.160987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 
00:36:14.292 [2024-12-14 16:49:44.161102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.161135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.161238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.161271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.161379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.161411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.161531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.161591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.161694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.161726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 
00:36:14.292 [2024-12-14 16:49:44.161896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.161928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.162123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.162156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.162342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.162374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.162576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.162610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.162729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.162762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 
00:36:14.292 [2024-12-14 16:49:44.162864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.162896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.163011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.163042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.163157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.163190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.163446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.163478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.163604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.163638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 
00:36:14.292 [2024-12-14 16:49:44.163831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.163864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.164046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.164078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.164183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.164216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.164344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.164376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.164543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.164588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 
00:36:14.292 [2024-12-14 16:49:44.164838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.164870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.164994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.165026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.165135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.165168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.165284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.165316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 00:36:14.292 [2024-12-14 16:49:44.165422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.292 [2024-12-14 16:49:44.165454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.292 qpair failed and we were unable to recover it. 
00:36:14.293 [2024-12-14 16:49:44.165634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.165707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.165902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.165938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.166069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.166102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.166223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.166256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.166370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.166403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 
00:36:14.293 [2024-12-14 16:49:44.166509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.166540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.166677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.166713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.166888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.166919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.167033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.167066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.167234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.167266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 
00:36:14.293 [2024-12-14 16:49:44.167450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.167483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.167584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.167619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.167730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.167763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.168032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.168065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.168257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.168291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 
00:36:14.293 [2024-12-14 16:49:44.168405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.168438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.168548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.168589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.168693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.168726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.168831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.168863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.169049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.169081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 
00:36:14.293 [2024-12-14 16:49:44.169192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.169223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.169337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.169370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.169488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.169520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.169713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.169747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.169933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.169965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 
00:36:14.293 [2024-12-14 16:49:44.170077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.170110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.170212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.170245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.170444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.170477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.170648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.170682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.170813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.170846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 
00:36:14.293 [2024-12-14 16:49:44.171105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.171137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.171255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.171287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.171403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.171435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.171542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.171581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.171757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.171790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 
00:36:14.293 [2024-12-14 16:49:44.171996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.172029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.172208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.172241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.172410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.172442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.293 [2024-12-14 16:49:44.172680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.293 [2024-12-14 16:49:44.172714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.293 qpair failed and we were unable to recover it. 00:36:14.294 [2024-12-14 16:49:44.172814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.294 [2024-12-14 16:49:44.172847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.294 qpair failed and we were unable to recover it. 
00:36:14.294 [2024-12-14 16:49:44.172958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.294 [2024-12-14 16:49:44.172996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.294 qpair failed and we were unable to recover it. 00:36:14.294 [2024-12-14 16:49:44.173182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.294 [2024-12-14 16:49:44.173214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.294 qpair failed and we were unable to recover it. 00:36:14.294 [2024-12-14 16:49:44.173319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.294 [2024-12-14 16:49:44.173351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.294 qpair failed and we were unable to recover it. 00:36:14.294 [2024-12-14 16:49:44.173469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.294 [2024-12-14 16:49:44.173501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.294 qpair failed and we were unable to recover it. 00:36:14.294 [2024-12-14 16:49:44.173701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.294 [2024-12-14 16:49:44.173734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.294 qpair failed and we were unable to recover it. 
00:36:14.294 [2024-12-14 16:49:44.174006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.294 [2024-12-14 16:49:44.174039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.294 qpair failed and we were unable to recover it. 
[... the same connect() / nvme_tcp_qpair_connect_sock error triplet repeats roughly 110 more times between 16:49:44.174209 and 16:49:44.196193, every occurrence with errno = 111, tqpair=0x7fedf8000b90, addr=10.0.0.2, port=4420 ...]
00:36:14.297 [2024-12-14 16:49:44.196306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.196344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.196513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.196545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.196720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.196753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.196856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.196889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.197132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.197164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 
00:36:14.297 [2024-12-14 16:49:44.197349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.197382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.197493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.197525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.197781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.197814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.197927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.197960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.198085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.198118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 
00:36:14.297 [2024-12-14 16:49:44.198222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.198254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.198490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.198523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.198728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.198763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.198890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.198923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.199048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.199081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 
00:36:14.297 [2024-12-14 16:49:44.199188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.199220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.199328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.199360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.199474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.199507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.199623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.199657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.199772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.199805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 
00:36:14.297 [2024-12-14 16:49:44.199985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.200018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.200206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.200237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.200354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.200386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.200494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.200527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 00:36:14.297 [2024-12-14 16:49:44.200753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.200826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 
00:36:14.297 [2024-12-14 16:49:44.201066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.297 [2024-12-14 16:49:44.201137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.297 qpair failed and we were unable to recover it. 
[message pair "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 ... qpair failed and we were unable to recover it." repeated 69 more times between 16:49:44.201275 and 16:49:44.214814]
00:36:14.299 [2024-12-14 16:49:44.214935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.214967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.215073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.215107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.215276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.215308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.215428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.215461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.215679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.215714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 
00:36:14.299 [2024-12-14 16:49:44.215914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.215945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.216060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.216093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.216278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.216310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.216502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.216534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.216660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.216693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 
00:36:14.299 [2024-12-14 16:49:44.216814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.216847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.217089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.217138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.299 [2024-12-14 16:49:44.217331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.299 [2024-12-14 16:49:44.217364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.299 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.217483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.217516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.217712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.217746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 
00:36:14.300 [2024-12-14 16:49:44.217919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.217953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.218076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.218110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.218300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.218334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.218530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.218585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.218707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.218739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 
00:36:14.300 [2024-12-14 16:49:44.218973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.219006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.219240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.219273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.219398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.219430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.219542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.219594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.219806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.219838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 
00:36:14.300 [2024-12-14 16:49:44.219967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.220006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.220127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.220160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.220296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.220344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.220529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.220575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.220745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.220777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 
00:36:14.300 [2024-12-14 16:49:44.220904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.220937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.221062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.221096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.221299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.221335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.221460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.221492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.221668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.221704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 
00:36:14.300 [2024-12-14 16:49:44.221888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.221921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.222088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.222121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.222322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.222355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.222467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.222500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.222713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.222746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 
00:36:14.300 [2024-12-14 16:49:44.222858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.222892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.223019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.223052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.223167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.223201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.223380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.223411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.223532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.223587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 
00:36:14.300 [2024-12-14 16:49:44.223709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.223741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.223956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.300 [2024-12-14 16:49:44.223989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.300 qpair failed and we were unable to recover it. 00:36:14.300 [2024-12-14 16:49:44.224166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.224199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.224391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.224423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.224548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.224590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 
00:36:14.301 [2024-12-14 16:49:44.224714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.224747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.224972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.225005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.225215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.225249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.225417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.225450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.225635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.225670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 
00:36:14.301 [2024-12-14 16:49:44.225798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.225829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.226001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.226033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.226228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.226260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.226437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.226469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.226591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.226625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 
00:36:14.301 [2024-12-14 16:49:44.226808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.226841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.227104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.227142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.227254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.227295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.227467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.227500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.227620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.227653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 
00:36:14.301 [2024-12-14 16:49:44.227838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.227876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.227988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.228021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.228140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.228173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.228382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.228414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.228520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.228553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 
00:36:14.301 [2024-12-14 16:49:44.228759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.228793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.228998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.229030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.229147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.229179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.229323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.229364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.229538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.229580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 
00:36:14.301 [2024-12-14 16:49:44.229704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.229737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.229856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.229889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.230025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.230058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.230261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.230294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.230538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.230581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 
00:36:14.301 [2024-12-14 16:49:44.230779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.230812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.230928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.230961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.231127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.231160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.231270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.231304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.231502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.231576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 
00:36:14.301 [2024-12-14 16:49:44.231776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.301 [2024-12-14 16:49:44.231810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.301 qpair failed and we were unable to recover it. 00:36:14.301 [2024-12-14 16:49:44.231916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.231949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.232072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.232104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.232218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.232252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.232370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.232402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 
00:36:14.302 [2024-12-14 16:49:44.232515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.232549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.232672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.232708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.232834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.232868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.232979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.233012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.233115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.233165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 
00:36:14.302 [2024-12-14 16:49:44.233346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.233380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.233493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.233526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.233674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.233720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.233974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.234007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.234130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.234163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 
00:36:14.302 [2024-12-14 16:49:44.234333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.234366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.234609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.234645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.234822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.234856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.235101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.235133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.235255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.235288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 
00:36:14.302 [2024-12-14 16:49:44.235479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.235519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.235684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.235720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.235910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.235942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.236133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.236168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.236274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.236307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 
00:36:14.302 [2024-12-14 16:49:44.236494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.236527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.236694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.236767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.237030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.237065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.237196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.237230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.237339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.237372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 
00:36:14.302 [2024-12-14 16:49:44.237488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.237519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.237648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.237683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.237852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.237885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.238069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.238102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.238294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.238328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 
00:36:14.302 [2024-12-14 16:49:44.238584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.238619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.238797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.238829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.238948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.238981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.239162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.239193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 00:36:14.302 [2024-12-14 16:49:44.239430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.302 [2024-12-14 16:49:44.239463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.302 qpair failed and we were unable to recover it. 
00:36:14.302 [2024-12-14 16:49:44.239576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.239610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.239798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.239831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.239954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.239986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.240156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.240188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.240292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.240325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 
00:36:14.303 [2024-12-14 16:49:44.240493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.240525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.240648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.240683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.240869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.240907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.241035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.241068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.241243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.241276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 
00:36:14.303 [2024-12-14 16:49:44.241394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.241426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.241573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.241695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.241729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.241941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.241973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.242163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.242196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 
00:36:14.303 [2024-12-14 16:49:44.242389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.242421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.242550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.242596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.242770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.242802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.242904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.242936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.243053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.243086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 
00:36:14.303 [2024-12-14 16:49:44.243190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.243230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.243408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.243440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.243622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.243657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.243789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.243822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.243931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.243962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 
00:36:14.303 [2024-12-14 16:49:44.244076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.244108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.244279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.244312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.244491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.244522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.244711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.244747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.244870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.244902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 
00:36:14.303 [2024-12-14 16:49:44.245034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.245067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.245190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.245223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.245473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.245505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.245619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.245653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.245830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.245863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 
00:36:14.303 [2024-12-14 16:49:44.246033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.246066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.246167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.246200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.246312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.246344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.246461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.303 [2024-12-14 16:49:44.246494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.303 qpair failed and we were unable to recover it. 00:36:14.303 [2024-12-14 16:49:44.246616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.304 [2024-12-14 16:49:44.246650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.304 qpair failed and we were unable to recover it. 
00:36:14.304 [2024-12-14 16:49:44.246777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.304 [2024-12-14 16:49:44.246810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.304 qpair failed and we were unable to recover it. 00:36:14.304 [2024-12-14 16:49:44.246990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.304 [2024-12-14 16:49:44.247023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.304 qpair failed and we were unable to recover it. 00:36:14.304 [2024-12-14 16:49:44.247195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.304 [2024-12-14 16:49:44.247227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.304 qpair failed and we were unable to recover it. 00:36:14.304 [2024-12-14 16:49:44.247332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.304 [2024-12-14 16:49:44.247364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.304 qpair failed and we were unable to recover it. 00:36:14.304 [2024-12-14 16:49:44.247573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.304 [2024-12-14 16:49:44.247606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.304 qpair failed and we were unable to recover it. 
00:36:14.304 [2024-12-14 16:49:44.247825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.304 [2024-12-14 16:49:44.247858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.304 qpair failed and we were unable to recover it.
00:36:14.304 [... the same three-line error pattern (connect() failed, errno = 111 → sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeated, with only timestamps changing, roughly 115 more times from 16:49:44.248099 through 16:49:44.270140 ...]
00:36:14.307 [2024-12-14 16:49:44.270318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.270350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.270541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.270582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.270752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.270785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.270898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.270931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.271171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.271203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 
00:36:14.307 [2024-12-14 16:49:44.271370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.271402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.271642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.271675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.271845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.271883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.271997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.272029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.272150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.272182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 
00:36:14.307 [2024-12-14 16:49:44.272290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.272321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.272501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.272533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.272753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.272785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.272971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.273003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.273113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.273145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 
00:36:14.307 [2024-12-14 16:49:44.273337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.273369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.273655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.273689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.273985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.274018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.274131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.274163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.274353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.274385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 
00:36:14.307 [2024-12-14 16:49:44.274523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.274555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.274745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.274777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.275011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.275044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.275164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.275197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.275366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.275398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 
00:36:14.307 [2024-12-14 16:49:44.275589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.275623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.275870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.275903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.276079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.276111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.276223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.276257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 00:36:14.307 [2024-12-14 16:49:44.276440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.307 [2024-12-14 16:49:44.276472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.307 qpair failed and we were unable to recover it. 
00:36:14.308 [2024-12-14 16:49:44.276640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.276675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.276777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.276810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.276983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.277015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.277132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.277165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.277292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.277325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 
00:36:14.308 [2024-12-14 16:49:44.277434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.277466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.277586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.277620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.277730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.277762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.277949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.277981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.278150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.278182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 
00:36:14.308 [2024-12-14 16:49:44.278363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.278396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.278659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.278693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.278806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.278838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.279018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.279050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.279235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.279268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 
00:36:14.308 [2024-12-14 16:49:44.279385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.279418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.279683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.279717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.279886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.279924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.280106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.280139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.280318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.280350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 
00:36:14.308 [2024-12-14 16:49:44.280466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.280499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.280610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.280644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.280823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.280855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.281044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.281076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.281267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.281300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 
00:36:14.308 [2024-12-14 16:49:44.281400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.281433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.281568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.281602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.281869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.281903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.282092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.282125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.282314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.282347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 
00:36:14.308 [2024-12-14 16:49:44.282449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.282482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.282614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.282649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.282840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.282872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.283005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.283038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.283214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.283246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 
00:36:14.308 [2024-12-14 16:49:44.283417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.283450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.283686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.283720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.283980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.284013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.308 qpair failed and we were unable to recover it. 00:36:14.308 [2024-12-14 16:49:44.284116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.308 [2024-12-14 16:49:44.284148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.309 qpair failed and we were unable to recover it. 00:36:14.309 [2024-12-14 16:49:44.284262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.309 [2024-12-14 16:49:44.284295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.309 qpair failed and we were unable to recover it. 
00:36:14.309 [2024-12-14 16:49:44.284408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.309 [2024-12-14 16:49:44.284441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.309 qpair failed and we were unable to recover it. 00:36:14.309 [2024-12-14 16:49:44.284568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.309 [2024-12-14 16:49:44.284602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.309 qpair failed and we were unable to recover it. 00:36:14.309 [2024-12-14 16:49:44.284715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.309 [2024-12-14 16:49:44.284747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.309 qpair failed and we were unable to recover it. 00:36:14.309 [2024-12-14 16:49:44.284934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.309 [2024-12-14 16:49:44.284967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.309 qpair failed and we were unable to recover it. 00:36:14.309 [2024-12-14 16:49:44.285146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.309 [2024-12-14 16:49:44.285180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.309 qpair failed and we were unable to recover it. 
00:36:14.309 [2024-12-14 16:49:44.285296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.309 [2024-12-14 16:49:44.285329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.309 qpair failed and we were unable to recover it.
[... subsequent identical connect() failures (errno = 111, connection refused) and unrecoverable qpair errors for tqpair=0x7fedec000b90, addr=10.0.0.2, port=4420 elided ...]
00:36:14.312 [2024-12-14 16:49:44.307700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.307732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.307844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.307876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.308093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.308126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.308237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.308269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.308393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.308425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 
00:36:14.312 [2024-12-14 16:49:44.308544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.308586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.308780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.308813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.308927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.308959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.309176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.309208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.309323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.309356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 
00:36:14.312 [2024-12-14 16:49:44.309472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.309504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.309680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.309713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.309907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.309940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.310136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.310169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.310285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.310317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 
00:36:14.312 [2024-12-14 16:49:44.310484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.310516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.310647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.310680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.310798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.310836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.311020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.311054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.311223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.311256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 
00:36:14.312 [2024-12-14 16:49:44.311377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.311409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.311596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.311631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.311803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.311836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.312009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.312042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.312148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.312180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 
00:36:14.312 [2024-12-14 16:49:44.312362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.312395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.312513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.312545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.312733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.312766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.313011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.313044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.313166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.313198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 
00:36:14.312 [2024-12-14 16:49:44.313460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.313493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.313774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.313807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.314000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.314033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.314146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.314179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 00:36:14.312 [2024-12-14 16:49:44.314373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.312 [2024-12-14 16:49:44.314405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.312 qpair failed and we were unable to recover it. 
00:36:14.313 [2024-12-14 16:49:44.314520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.314553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.314752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.314784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.314905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.314938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.315105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.315137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.315305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.315337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 
00:36:14.313 [2024-12-14 16:49:44.315602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.315637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.315775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.315807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.315917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.315950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.316117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.316150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.316265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.316298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 
00:36:14.313 [2024-12-14 16:49:44.316473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.316506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.316621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.316655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.316843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.316876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.317134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.317167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.317281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.317314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 
00:36:14.313 [2024-12-14 16:49:44.317481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.317513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.317655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.317689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.317809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.317842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.318014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.318046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.318223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.318256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 
00:36:14.313 [2024-12-14 16:49:44.318512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.318544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.318749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.318782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.318895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.318932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.319118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.319151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.319322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.319355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 
00:36:14.313 [2024-12-14 16:49:44.319470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.319502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.319706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.319740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.319842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.319874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.320111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.320143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.320380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.320412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 
00:36:14.313 [2024-12-14 16:49:44.320593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.320628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.320801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.320834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.321109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.321142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.321255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.321288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.313 [2024-12-14 16:49:44.321457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.321490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 
00:36:14.313 [2024-12-14 16:49:44.321683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.313 [2024-12-14 16:49:44.321718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.313 qpair failed and we were unable to recover it. 00:36:14.314 [2024-12-14 16:49:44.321921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.321954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 00:36:14.314 [2024-12-14 16:49:44.322071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.322104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 00:36:14.314 [2024-12-14 16:49:44.322218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.322251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 00:36:14.314 [2024-12-14 16:49:44.322374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.322407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 
00:36:14.314 [2024-12-14 16:49:44.322512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.322545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 00:36:14.314 [2024-12-14 16:49:44.322738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.322771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 00:36:14.314 [2024-12-14 16:49:44.322961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.322993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 00:36:14.314 [2024-12-14 16:49:44.323110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.323143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 00:36:14.314 [2024-12-14 16:49:44.323271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.314 [2024-12-14 16:49:44.323302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.314 qpair failed and we were unable to recover it. 
00:36:14.314 [2024-12-14 16:49:44.323405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.315 [2024-12-14 16:49:44.323437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.315 qpair failed and we were unable to recover it. 00:36:14.315 [2024-12-14 16:49:44.323539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.315 [2024-12-14 16:49:44.323579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.315 qpair failed and we were unable to recover it. 00:36:14.315 [2024-12-14 16:49:44.323820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.315 [2024-12-14 16:49:44.323853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.315 qpair failed and we were unable to recover it. 00:36:14.315 [2024-12-14 16:49:44.323956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.315 [2024-12-14 16:49:44.323988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.315 qpair failed and we were unable to recover it. 00:36:14.315 [2024-12-14 16:49:44.324181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.315 [2024-12-14 16:49:44.324214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:14.315 qpair failed and we were unable to recover it. 
00:36:14.315 [2024-12-14 16:49:44.324331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.324364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.324545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.324589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.324869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.324901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.325034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.325066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.325184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.325216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.325353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.325387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.325576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.325610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.325718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.325750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.325920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.325952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.326147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.326179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.326383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.326416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.326654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.326688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.326822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.326865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.326988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.327021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.327154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.327187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.327405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.327438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.327544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.315 [2024-12-14 16:49:44.327606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.315 qpair failed and we were unable to recover it.
00:36:14.315 [2024-12-14 16:49:44.327714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.327747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.327933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.327965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.328081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.328113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.328282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.328315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.328511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.328544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.328722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.328756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.328945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.328978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.329170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.329203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.329317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.329350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.329549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.329595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.594 qpair failed and we were unable to recover it.
00:36:14.594 [2024-12-14 16:49:44.329711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.594 [2024-12-14 16:49:44.329745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.329946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.329979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.330099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.330132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.330330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.330363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.330474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.330507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.330787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.330821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.331017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.331050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.331240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.331273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.331454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.331486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.331660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.331694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.331930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.331963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.332151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.332183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.332431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.332502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.332652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.332689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.332804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.332837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.332966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.332998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.333110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.333143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.333264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.333297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.333491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.333522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.333644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.333678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.333800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.333831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.333947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.333979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.334097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.334129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.334250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.334282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.334472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.334504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.334623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.334657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.334840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.334872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.335001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.335033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.335200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.335232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.335425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.335457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.335713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.335747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.335851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.335883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.336077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.336109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.336377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.336409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.336525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.336576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.336769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.336802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.336981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.337013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.337134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.337166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.337350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.337382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.337488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.337526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.337654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.337687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.337855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.337888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.338068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.338100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.338284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.338315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.338494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.338527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.338737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.338775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.338994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.339026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.339268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.339301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.339484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.339516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.339805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.339839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.340025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.340057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.340227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.340258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.340449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.340482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.340696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.340731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.340905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.340936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.341178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.341210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.341324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.341356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.341524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.341564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.595 [2024-12-14 16:49:44.341730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.595 [2024-12-14 16:49:44.341763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.595 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.342017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.342049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.342158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.342190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.342303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.342336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.342545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.342585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.342866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.342898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.343028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.343059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.343182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.343215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.343335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.343371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.343572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.596 [2024-12-14 16:49:44.343607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.596 qpair failed and we were unable to recover it.
00:36:14.596 [2024-12-14 16:49:44.343712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.343744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.343854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.343887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.343992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.344024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.344195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.344226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.344442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.344475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.344596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.344631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.344815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.344847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.345036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.345068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.345246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.345277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.345393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.345426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.345607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.345641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.345822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.345853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.345978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.346010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.346139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.346171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.346375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.346409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.346528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.346570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.346821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.346854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.347107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.347139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.347326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.347359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.347616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.347651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.347842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.347875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.347993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.348025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.348258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.348290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.348501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.348533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.348810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.348843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.349024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.349061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.349319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.349352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.349547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.349591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.349760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.349792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.349901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.349933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.350201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.350235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.350425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.350457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.350642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.350677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.350854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.350886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.351169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.351202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.351456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.351488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.351814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.351848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.352093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.352126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.352306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.352339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.352516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.352549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.352825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.352857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.353039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.353072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.353275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.353307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.353574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.353607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.353881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.353913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.596 [2024-12-14 16:49:44.354128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.354161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.354275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.354308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.354541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.354584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.354834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.354866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 00:36:14.596 [2024-12-14 16:49:44.355038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.596 [2024-12-14 16:49:44.355070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.596 qpair failed and we were unable to recover it. 
00:36:14.597 [2024-12-14 16:49:44.355261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.355293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.355521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.355553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.355779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.355811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.355999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.356031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.356271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.356304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 
00:36:14.597 [2024-12-14 16:49:44.356496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.356528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.356745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.356779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.356907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.356940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.357051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.357082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.357201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.357234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 
00:36:14.597 [2024-12-14 16:49:44.357354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.357385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.357661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.357697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.357886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.357918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.358085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.358117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.358307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.358340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 
00:36:14.597 [2024-12-14 16:49:44.358520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.358553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.358770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.358804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.358983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.359016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.359263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.359295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.359486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.359518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 
00:36:14.597 [2024-12-14 16:49:44.359766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.359800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.359997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.360029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.360155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.360186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.360296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.360327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.360442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.360473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 
00:36:14.597 [2024-12-14 16:49:44.360642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.360676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.360913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.360946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.361206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.361239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.361473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.361505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.361689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.361724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 
00:36:14.597 [2024-12-14 16:49:44.361912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.361945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.362130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.362162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.362339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.362372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.362626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.362659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 00:36:14.597 [2024-12-14 16:49:44.362849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.362880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it. 
00:36:14.597 [2024-12-14 16:49:44.363050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.597 [2024-12-14 16:49:44.363082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.597 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure message triplets, repeated for each retry from 16:49:44.363355 through 16:49:44.389885, omitted]
00:36:14.599 [2024-12-14 16:49:44.390146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.390179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.390373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.390405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.390575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.390615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.390786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.390819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.391082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.391114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 
00:36:14.599 [2024-12-14 16:49:44.391352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.391384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.391592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.391625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.391889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.391921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.392213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.392246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.392436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.392468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 
00:36:14.599 [2024-12-14 16:49:44.392640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.392674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.392935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.392969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.393136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.393168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.393350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.393382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.393495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.393527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 
00:36:14.599 [2024-12-14 16:49:44.393742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.393775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.393972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.394005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.394265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.394298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.394537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.394578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.599 qpair failed and we were unable to recover it. 00:36:14.599 [2024-12-14 16:49:44.394867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.599 [2024-12-14 16:49:44.394901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.395114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.395148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.395415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.395448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.395756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.395790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.395959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.395992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.396252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.396285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.396455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.396488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.396655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.396690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.396956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.396988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.397232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.397265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.397445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.397484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.397681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.397714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.397837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.397869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.397985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.398018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.398136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.398168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.398336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.398368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.398539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.398582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.398884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.398917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.399200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.399233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.399509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.399542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.399866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.399900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.400164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.400196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.400493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.400526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.400788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.400821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.401112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.401145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.401382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.401414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.401585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.401620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.401893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.401926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.402211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.402243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.402410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.402443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.402704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.402738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.402909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.402941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.403150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.403184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.403351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.403384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.403497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.403529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.403807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.403841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.404110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.404143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.404329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.404361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.404622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.404657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.404894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.404927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.405216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.405248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.405362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.405394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.405658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.405693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.405885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.405917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.406155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.406188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.406302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.406334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.406505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.406539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.406758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.406791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.407041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.407074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.407257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.407290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.407399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.407430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 
00:36:14.600 [2024-12-14 16:49:44.407610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.407666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.407930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.600 [2024-12-14 16:49:44.407962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.600 qpair failed and we were unable to recover it. 00:36:14.600 [2024-12-14 16:49:44.408083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.601 [2024-12-14 16:49:44.408116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.601 qpair failed and we were unable to recover it. 00:36:14.601 [2024-12-14 16:49:44.408380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.601 [2024-12-14 16:49:44.408414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.601 qpair failed and we were unable to recover it. 00:36:14.601 [2024-12-14 16:49:44.408706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.601 [2024-12-14 16:49:44.408740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.601 qpair failed and we were unable to recover it. 
00:36:14.601 [2024-12-14 16:49:44.408927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.601 [2024-12-14 16:49:44.408959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.601 qpair failed and we were unable to recover it. 00:36:14.601 [2024-12-14 16:49:44.409092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.601 [2024-12-14 16:49:44.409126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.601 qpair failed and we were unable to recover it. 00:36:14.601 [2024-12-14 16:49:44.409321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.601 [2024-12-14 16:49:44.409354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.601 qpair failed and we were unable to recover it. 00:36:14.601 [2024-12-14 16:49:44.409613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.601 [2024-12-14 16:49:44.409647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.601 qpair failed and we were unable to recover it. 00:36:14.601 [2024-12-14 16:49:44.409830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.601 [2024-12-14 16:49:44.409863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.601 qpair failed and we were unable to recover it. 
00:36:14.602 [2024-12-14 16:49:44.437535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.437587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.437847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.437879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.438138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.438171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.438347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.438379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.438553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.438596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.438767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.438799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.438907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.438940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.439162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.439195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.439389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.439423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.439607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.439640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.439904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.439938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.440132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.440164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.440425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.440459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.440713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.440748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.440992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.441024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.441221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.441253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.441422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.441460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.441633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.441666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.441881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.441916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.442039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.442072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.442335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.442368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.442610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.442645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.442822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.442855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.442977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.443009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.443191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.443222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.443408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.443441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.443685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.443721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.443894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.443927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.444100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.444134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.444343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.444377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.444653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.444688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.444865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.444899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.445087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.445121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.445235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.445267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.445446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.445479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.445751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.445785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.445934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.445968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.446145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.446178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.446354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.446387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.446664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.446698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.446972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.447005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.447276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.447310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.447517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.447551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.447739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.447771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.447977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.448009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.448181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.448213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.448454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.448485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.448670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.448704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.448973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.449005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.449178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.449211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.603 [2024-12-14 16:49:44.449454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.449489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.449705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.449738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.449910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.449942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.450235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.450269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 00:36:14.603 [2024-12-14 16:49:44.450519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.603 [2024-12-14 16:49:44.450552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.603 qpair failed and we were unable to recover it. 
00:36:14.604 [2024-12-14 16:49:44.450858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.450890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.451146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.451178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.451356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.451389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.451654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.451689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.451943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.451976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 
00:36:14.604 [2024-12-14 16:49:44.452111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.452144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.452430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.452463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.452709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.452744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.452947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.452979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.453106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.453139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 
00:36:14.604 [2024-12-14 16:49:44.453356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.453390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.453494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.453527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.453719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.453754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.453957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.453990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.454269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.454303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 
00:36:14.604 [2024-12-14 16:49:44.454513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.454546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.454811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.454845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.455052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.455085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.455352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.455385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 00:36:14.604 [2024-12-14 16:49:44.455655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.604 [2024-12-14 16:49:44.455690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.604 qpair failed and we were unable to recover it. 
00:36:14.604 [2024-12-14 16:49:44.455979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.604 [2024-12-14 16:49:44.456012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.604 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error for tqpair=0xbebcd0 (addr=10.0.0.2, port=4420) repeated through 16:49:44.484310; each qpair failed and could not be recovered ...]
00:36:14.606 [2024-12-14 16:49:44.484581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.484617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.484817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.484851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.484980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.485015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.485200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.485244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.485445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.485478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 
00:36:14.606 [2024-12-14 16:49:44.485730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.485765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.485962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.485998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.486178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.486212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.486414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.486449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.486714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.486750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 
00:36:14.606 [2024-12-14 16:49:44.486873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.486907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.487110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.487144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.487428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.487461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.487672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.487708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.487833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.487866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 
00:36:14.606 [2024-12-14 16:49:44.487997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.488031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.488225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.488260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.488393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.488428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.488661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.488695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.488830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.488864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 
00:36:14.606 [2024-12-14 16:49:44.489117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.489151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.489335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.489369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.489668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.489703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.489966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.489999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.490184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.490218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 
00:36:14.606 [2024-12-14 16:49:44.490422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.490455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.490643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.490678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.490861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.490894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.491175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.491208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.491473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.491508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 
00:36:14.606 [2024-12-14 16:49:44.491643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.491684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.491958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.491992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.492249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.492284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.492589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.492625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.492885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.492918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 
00:36:14.606 [2024-12-14 16:49:44.493146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.493180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.606 qpair failed and we were unable to recover it. 00:36:14.606 [2024-12-14 16:49:44.493374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.606 [2024-12-14 16:49:44.493408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.493722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.493756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.493937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.493971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.494223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.494257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
00:36:14.607 [2024-12-14 16:49:44.494445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.494479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.494681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.494716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.494902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.494935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.495223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.495258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.495393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.495427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
00:36:14.607 [2024-12-14 16:49:44.495701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.495737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.495959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.495992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.496110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.496144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.496397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.496430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.496551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.496598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
00:36:14.607 [2024-12-14 16:49:44.496721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.496754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.496879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.496912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.497090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.497124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.497423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.497457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.497576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.497611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
00:36:14.607 [2024-12-14 16:49:44.497905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.497939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.498194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.498229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.498459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.498494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.498822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.498859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.499043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.499076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
00:36:14.607 [2024-12-14 16:49:44.499259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.499292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.499470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.499503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.499701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.499736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.500032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.500067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.500198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.500231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
00:36:14.607 [2024-12-14 16:49:44.500415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.500450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.500772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.500807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.501081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.501113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.501318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.501353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.501546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.501598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
00:36:14.607 [2024-12-14 16:49:44.501778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.501812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.502074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.502115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.502414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.502447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.502700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.502735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 00:36:14.607 [2024-12-14 16:49:44.502869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.502903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
00:36:14.607 [2024-12-14 16:49:44.503109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.607 [2024-12-14 16:49:44.503144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.607 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / sock connection error pairs for tqpair=0xbebcd0 (addr=10.0.0.2, port=4420) repeat continuously from 16:49:44.503328 through 16:49:44.531367, each ending "qpair failed and we were unable to recover it." ...]
00:36:14.609 [2024-12-14 16:49:44.531584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.531618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.531846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.531880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.532130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.532165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.532389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.532424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.532544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.532592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 
00:36:14.609 [2024-12-14 16:49:44.532802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.532834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.532943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.532976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.533093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.533127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.533327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.533361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.533623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.533659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 
00:36:14.609 [2024-12-14 16:49:44.533892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.533925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.534108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.534141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.534253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.534287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.534422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.534455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.534662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.534697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 
00:36:14.609 [2024-12-14 16:49:44.534906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.534941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.535125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.535159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.535340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.609 [2024-12-14 16:49:44.535383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.609 qpair failed and we were unable to recover it. 00:36:14.609 [2024-12-14 16:49:44.535636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.535671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.535856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.535889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.536168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.536201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.536403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.536436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.536733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.536768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.536979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.537013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.537286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.537319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.537609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.537645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.537830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.537863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.538063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.538096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.538314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.538347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.538474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.538507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.538833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.538868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.539173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.539207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.539403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.539435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.539637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.539671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.539816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.539849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.540052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.540086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.540282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.540316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.540428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.540461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.540650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.540684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.540794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.540829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.541014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.541049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.541159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.541192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.541464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.541498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.541766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.541800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.542022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.542055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.542263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.542297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.542424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.542459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.542670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.542705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.542911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.542944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.543062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.543096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.543280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.543313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.543497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.543532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.543753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.543788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.543996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.544029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.544156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.544191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.544384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.544417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.544625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.544661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.544852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.544885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.545094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.545129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.545306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.545340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.545546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.545591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.545777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.545810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.546074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.546107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.546289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.546323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.546523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.546571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.546684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.546718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.546847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.546880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.547154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.547187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.547424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.547458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.547733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.547768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.547951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.547984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.548104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.548139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.548325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.548359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.548540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.548586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 00:36:14.610 [2024-12-14 16:49:44.548708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.548741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.610 [2024-12-14 16:49:44.548958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.610 [2024-12-14 16:49:44.548992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.610 qpair failed and we were unable to recover it. 
00:36:14.612 [identical error triplet repeated: posix.c:1054:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it" — recurring from 16:49:44.548958 through 16:49:44.576228]
00:36:14.612 [2024-12-14 16:49:44.576479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.612 [2024-12-14 16:49:44.576512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.612 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.576644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.576681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.576954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.576987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.577096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.577128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.577416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.577458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.577728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.577763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.577991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.578024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.578137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.578170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.578361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.578396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.578655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.578689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.578809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.578841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.579019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.579052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.579282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.579318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.579603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.579637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.579822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.579856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.580130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.580164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.580460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.580492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.580700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.580734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.580949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.580983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.581180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.581214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.581438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.581471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.581768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.581804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.582070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.582103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.582214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.582248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.582442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.582476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.582733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.582769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.582896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.582929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.583186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.583220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.583429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.583462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.583752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.583788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.583986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.584020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.584227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.584272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.584383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.584416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.584681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.584716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.584914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.584948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.585070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.585104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.585234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.585266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.585570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.585605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.585782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.585814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.586021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.586054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.586245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.586278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.586399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.586431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.586692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.586728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.586924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.586956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.587173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.587207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.587456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.587491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.587675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.587710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.587887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.587921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.588116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.588149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.588266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.588298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.588484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.588519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.588734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.588770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.588949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.588982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.589185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.589219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.589473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.589508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.613 [2024-12-14 16:49:44.589742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.589776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.589965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.589999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.590122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.590155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.590337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.590371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 00:36:14.613 [2024-12-14 16:49:44.590554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.613 [2024-12-14 16:49:44.590600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.613 qpair failed and we were unable to recover it. 
00:36:14.614 [2024-12-14 16:49:44.590878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.590913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.591025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.591058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.591252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.591287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.591476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.591509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.591735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.591771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 
00:36:14.614 [2024-12-14 16:49:44.591978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.592012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.592206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.592240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.592422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.592455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.592735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.592771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.593053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.593087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 
00:36:14.614 [2024-12-14 16:49:44.593288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.593321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.593588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.593644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.593906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.593947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.594221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.594254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 00:36:14.614 [2024-12-14 16:49:44.594542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.614 [2024-12-14 16:49:44.594588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.614 qpair failed and we were unable to recover it. 
00:36:14.614 [2024-12-14 16:49:44.594868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.614 [2024-12-14 16:49:44.594902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.614 qpair failed and we were unable to recover it.
00:36:14.616 [2024-12-14 16:49:44.622190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.622224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.622496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.622530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.622721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.622756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.623013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.623047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.623169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.623204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 
00:36:14.616 [2024-12-14 16:49:44.623487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.623522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.623723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.623759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.623885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.623919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.624049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.624084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.624193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.624225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 
00:36:14.616 [2024-12-14 16:49:44.624428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.624460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.624580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.624615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.624819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.624854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.625032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.625066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.625352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.625386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 
00:36:14.616 [2024-12-14 16:49:44.625578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.625614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.625745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.625778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.625981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.626014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.626207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.626247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.626433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.626467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 
00:36:14.616 [2024-12-14 16:49:44.626653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.626687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.626871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.626906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.627110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.627144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.627395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.627428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.627705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.627740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 
00:36:14.616 [2024-12-14 16:49:44.627852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.627885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.628073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.628108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.628234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.628267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.628478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.628513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.628718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.628753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 
00:36:14.616 [2024-12-14 16:49:44.628960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.628995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.629192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.629226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.629485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.629519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.629729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.629765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.629894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.629928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 
00:36:14.616 [2024-12-14 16:49:44.630199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.630233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.630440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.630474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.630662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.630697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.630878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.630912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.631022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.631057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 
00:36:14.616 [2024-12-14 16:49:44.631346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.631380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.631631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.616 [2024-12-14 16:49:44.631666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.616 qpair failed and we were unable to recover it. 00:36:14.616 [2024-12-14 16:49:44.631880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.631914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.632115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.632149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.632337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.632370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 
00:36:14.617 [2024-12-14 16:49:44.632583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.632619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.632839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.632872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.633053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.633088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.633275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.633307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.633509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.633542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 
00:36:14.617 [2024-12-14 16:49:44.633831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.633866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.633978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.634011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.634279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.634312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.634420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.634454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.634575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.634610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 
00:36:14.617 [2024-12-14 16:49:44.634826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.634860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.635062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.635095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.635310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.635344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.635523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.635582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.635841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.635882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 
00:36:14.617 [2024-12-14 16:49:44.636155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.636190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.636469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.636502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.636721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.636757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.637055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.637088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.637363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.637397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 
00:36:14.617 [2024-12-14 16:49:44.637602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.637638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.637838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.637872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.638127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.638162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.638362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.638396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.638523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.638580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 
00:36:14.617 [2024-12-14 16:49:44.638867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.638902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.639171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.639205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.639386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.639418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.639626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.639661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 00:36:14.617 [2024-12-14 16:49:44.639847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.639880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it. 
00:36:14.617 [2024-12-14 16:49:44.640064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.617 [2024-12-14 16:49:44.640098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.617 qpair failed and we were unable to recover it.
[log condensed: the three-line failure above (connect() failed, errno = 111 / connection refused; sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated a further ~114 times with identical content, timestamps 16:49:44.640299 through 16:49:44.669438]
00:36:14.901 [2024-12-14 16:49:44.669596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.901 [2024-12-14 16:49:44.669630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.901 qpair failed and we were unable to recover it. 00:36:14.901 [2024-12-14 16:49:44.669911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.901 [2024-12-14 16:49:44.669946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.901 qpair failed and we were unable to recover it. 00:36:14.901 [2024-12-14 16:49:44.670234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.901 [2024-12-14 16:49:44.670268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.901 qpair failed and we were unable to recover it. 00:36:14.901 [2024-12-14 16:49:44.670540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.901 [2024-12-14 16:49:44.670583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.901 qpair failed and we were unable to recover it. 00:36:14.901 [2024-12-14 16:49:44.670849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.901 [2024-12-14 16:49:44.670883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.901 qpair failed and we were unable to recover it. 
00:36:14.901 [2024-12-14 16:49:44.671168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.901 [2024-12-14 16:49:44.671208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.901 qpair failed and we were unable to recover it. 00:36:14.901 [2024-12-14 16:49:44.671477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.901 [2024-12-14 16:49:44.671510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.901 qpair failed and we were unable to recover it. 00:36:14.901 [2024-12-14 16:49:44.671800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.901 [2024-12-14 16:49:44.671835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.901 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.672079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.672113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.672315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.672349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.672628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.672663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.672947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.672981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.673162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.673196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.673384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.673418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.673532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.673589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.673799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.673833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.674014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.674049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.674253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.674286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.674553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.674597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.674827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.674861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.675110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.675144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.675352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.675386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.675605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.675639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.675828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.675863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.676059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.676093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.676342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.676375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.676568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.676604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.676856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.676889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.677093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.677127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.677399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.677434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.677724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.677759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.678041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.678075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.678258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.678297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.678479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.678513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.678655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.678690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.678867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.678902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.679174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.679208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.679460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.679494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.679696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.679732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.679984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.680018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.680202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.680236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.680420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.680454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.680681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.680716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.680914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.680949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.681152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.681185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.681292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.681326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.681587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.681624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.681875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.681909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.682178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.682212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.682414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.682449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 
00:36:14.902 [2024-12-14 16:49:44.682642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.682677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.682945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.682978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.683202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.902 [2024-12-14 16:49:44.683237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.902 qpair failed and we were unable to recover it. 00:36:14.902 [2024-12-14 16:49:44.683376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.683410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.683693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.683727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 
00:36:14.903 [2024-12-14 16:49:44.683918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.683952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.684152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.684185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.684387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.684421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.684634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.684669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.684974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.685008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 
00:36:14.903 [2024-12-14 16:49:44.685135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.685169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.685368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.685402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.685534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.685590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.685859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.685893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.686162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.686195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 
00:36:14.903 [2024-12-14 16:49:44.686389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.686424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.686540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.686586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.686840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.686874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.687066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.687102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.687301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.687335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 
00:36:14.903 [2024-12-14 16:49:44.687565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.687601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.687802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.687836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.688067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.688100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.688285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.688324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 00:36:14.903 [2024-12-14 16:49:44.688526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.903 [2024-12-14 16:49:44.688571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.903 qpair failed and we were unable to recover it. 
00:36:14.903 [2024-12-14 16:49:44.688776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.903 [2024-12-14 16:49:44.688810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:14.903 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim from 16:49:44.688995 through 16:49:44.717676 ...]
00:36:14.906 [2024-12-14 16:49:44.717937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.717969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.718153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.718185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.718384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.718415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.718592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.718628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.718879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.718913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 
00:36:14.906 [2024-12-14 16:49:44.719207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.719239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.719566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.719600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.719782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.719816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.720090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.720122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.720402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.720435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 
00:36:14.906 [2024-12-14 16:49:44.720722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.720757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.721027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.721060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.721299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.721332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.721609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.721644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.721895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.721928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 
00:36:14.906 [2024-12-14 16:49:44.722203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.722236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.722519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.722552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.722703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.722737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.722987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.723021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.723297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.723335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 
00:36:14.906 [2024-12-14 16:49:44.723543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.723605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.723858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.723894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.724075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.724109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.724303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.724337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.724540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.724588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 
00:36:14.906 [2024-12-14 16:49:44.724845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.724879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.725003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.725035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.725169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.725202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.725325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.725359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.725574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.725610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 
00:36:14.906 [2024-12-14 16:49:44.725788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.725821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.726093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.726127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.726330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.726363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.726632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.726669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.726798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.726833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 
00:36:14.906 [2024-12-14 16:49:44.727041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.727075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.727237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.727270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.727545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.727594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.727898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.727933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.728085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.728119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 
00:36:14.906 [2024-12-14 16:49:44.728310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.906 [2024-12-14 16:49:44.728343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.906 qpair failed and we were unable to recover it. 00:36:14.906 [2024-12-14 16:49:44.728458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.728489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.728696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.728731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.728914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.728948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.729079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.729112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.729364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.729397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.729667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.729701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.729833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.729866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.730065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.730098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.730288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.730320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.730512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.730545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.730764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.730799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.731023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.731057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.731251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.731283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.731469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.731502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.731629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.731664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.731789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.731820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.732030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.732065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.732202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.732236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.732442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.732473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.732603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.732643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.732838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.732872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.732997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.733031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.733150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.733183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.733456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.733489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.733783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.733818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.733941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.733975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.734091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.734129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.734378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.734409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.734587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.734620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.734873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.734906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.735110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.735144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.735361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.735393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.735618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.735650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.735935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.735971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.736253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.736286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.736601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.736636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.736888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.736923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.737103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.737135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.737338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.737372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.737644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.737679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.737873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.737906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.738030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.738062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.738175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.738207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.738401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.738435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 
00:36:14.907 [2024-12-14 16:49:44.738637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.738673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.738856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.907 [2024-12-14 16:49:44.738889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.907 qpair failed and we were unable to recover it. 00:36:14.907 [2024-12-14 16:49:44.739160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.739199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.739389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.739422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.739628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.739662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 
00:36:14.908 [2024-12-14 16:49:44.739774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.739806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.740076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.740109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.740307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.740339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.740462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.740494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.740812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.740849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 
00:36:14.908 [2024-12-14 16:49:44.741071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.741105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.741319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.741353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.741543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.741587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.741770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.741804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.741984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.742018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 
00:36:14.908 [2024-12-14 16:49:44.742233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.742268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.742536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.742585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.742774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.742808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.743010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.743044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.743182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.743215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 
00:36:14.908 [2024-12-14 16:49:44.743405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.743438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.743618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.743654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.743780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.743813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.743988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.744020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.744246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.744280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 
00:36:14.908 [2024-12-14 16:49:44.744537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.744593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.744849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.744883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.745077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.745110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.745230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.745262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.745514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.745546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 
00:36:14.908 [2024-12-14 16:49:44.745764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.745797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.746057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.746090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.746203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.746235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.746440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.746475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.746658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.746693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 
00:36:14.908 [2024-12-14 16:49:44.746839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.746870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.747067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.747099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.747293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.747325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.747438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.747471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.747746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.747783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 
00:36:14.908 [2024-12-14 16:49:44.748007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.748042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.748267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.748306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.748418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.908 [2024-12-14 16:49:44.748451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.908 qpair failed and we were unable to recover it. 00:36:14.908 [2024-12-14 16:49:44.748774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.748819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.749077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.749113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.749255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.749289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.749519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.749554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.749755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.749790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.750091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.750125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.750329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.750362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.750618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.750654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.750843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.750877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.751158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.751192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.751309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.751343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.751525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.751573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.751777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.751811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.752017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.752050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.752237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.752271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.752451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.752484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.752714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.752750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.752931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.752965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.753145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.753178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.753407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.753441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.753630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.753666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.753852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.753888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.754099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.754136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.754414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.754450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.754650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.754686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.754885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.754921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.755110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.755143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.755418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.755460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.755658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.755694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.755947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.755982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.756180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.756214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.756340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.756373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.756581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.756618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.756824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.756858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.757038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.757071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.757349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.757384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.757501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.757535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.757742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.757777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.758005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.758038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.758245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.758279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.758535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.758582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 00:36:14.909 [2024-12-14 16:49:44.758824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.758905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it. 
00:36:14.909 [2024-12-14 16:49:44.759164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.909 [2024-12-14 16:49:44.759203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:14.909 qpair failed and we were unable to recover it.
[... the three-part error sequence above (posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it.") repeats continuously from 16:49:44.759 through 16:49:44.788, cycling over tqpair values 0x7fedf8000b90, 0xbebcd0, 0x7fedec000b90, and 0x7fedf0000b90; identical lines elided. One distinct error occurs within the run: ...]
00:36:14.910 [2024-12-14 16:49:44.769592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf9c70 (9): Bad file descriptor
00:36:14.912 [2024-12-14 16:49:44.788842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.788876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.789078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.789112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.789228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.789261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.789440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.789473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.789684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.789720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 
00:36:14.912 [2024-12-14 16:49:44.789999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.790033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.790315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.790349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.790630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.790665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.790871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.790904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.791082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.791116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 
00:36:14.912 [2024-12-14 16:49:44.791315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.791349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.791591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.791627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.791814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.791848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.792026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.792058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.792178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.792212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 
00:36:14.912 [2024-12-14 16:49:44.792394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.792427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.792701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.792736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.792916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.792951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.793126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.793159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 00:36:14.912 [2024-12-14 16:49:44.793345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.793380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.912 qpair failed and we were unable to recover it. 
00:36:14.912 [2024-12-14 16:49:44.793610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.912 [2024-12-14 16:49:44.793646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.793915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.793949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.794151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.794185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.794436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.794470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.794667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.794703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.794975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.795008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.795291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.795324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.795584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.795619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.795750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.795783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.795895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.795929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.796050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.796083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.796260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.796295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.796505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.796545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.796789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.796823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.797153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.797187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.797442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.797476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.797728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.797764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.797879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.797912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.798164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.798198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.798319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.798353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.798541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.798584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.798838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.798872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.799002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.799035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.799217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.799251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.799374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.799407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.799676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.799711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.799903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.799938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.800213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.800246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.800448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.800481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.800682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.800718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.800970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.801003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.801205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.801239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.801417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.801451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.801629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.801688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.801901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.801935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.802215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.802248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.802431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.802465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.802652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.802686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.802968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.803001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.803220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.803261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.803442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.803476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.803753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.803788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.804014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.804049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.804243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.804276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.804410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.804443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 
00:36:14.913 [2024-12-14 16:49:44.804665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.804701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.913 [2024-12-14 16:49:44.804886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.913 [2024-12-14 16:49:44.804920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.913 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.805115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.805148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.805329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.805363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.805542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.805589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 
00:36:14.914 [2024-12-14 16:49:44.805843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.805877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.806055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.806090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.806379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.806412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.806580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.806617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.806804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.806837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 
00:36:14.914 [2024-12-14 16:49:44.807020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.807052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.807335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.807368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.807687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.807723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.807872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.807906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 00:36:14.914 [2024-12-14 16:49:44.808135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.914 [2024-12-14 16:49:44.808168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.914 qpair failed and we were unable to recover it. 
00:36:14.916 [2024-12-14 16:49:44.836011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.916 [2024-12-14 16:49:44.836045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.916 qpair failed and we were unable to recover it. 00:36:14.916 [2024-12-14 16:49:44.836243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.916 [2024-12-14 16:49:44.836276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.916 qpair failed and we were unable to recover it. 00:36:14.916 [2024-12-14 16:49:44.836536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.916 [2024-12-14 16:49:44.836583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.916 qpair failed and we were unable to recover it. 00:36:14.916 [2024-12-14 16:49:44.836836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.916 [2024-12-14 16:49:44.836870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.916 qpair failed and we were unable to recover it. 00:36:14.916 [2024-12-14 16:49:44.837066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.916 [2024-12-14 16:49:44.837100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.916 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.837211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.837244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.837500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.837535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.837741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.837776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.838050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.838083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.838264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.838297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.838407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.838440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.838719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.838754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.838947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.838981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.839159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.839193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.839390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.839424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.839725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.839759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.839939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.839973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.840165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.840199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.840478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.840511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.840739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.840775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.841074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.841109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.841290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.841323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.841609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.841644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.841923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.841957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.842139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.842172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.842292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.842325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.842547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.842610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.842802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.842836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.843106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.843140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.843414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.843447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.843739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.843774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.843959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.843993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.844174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.844208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.844489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.844522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.844813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.844848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.844958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.844992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.845170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.845204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.845476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.845509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.845845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.845880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.845992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.846025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.846163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.846196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.846404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.846438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.846621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.846656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.846860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.846893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.847016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.847049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.847252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.847285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.847538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.847589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.847704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.847738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.847915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.847949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 00:36:14.917 [2024-12-14 16:49:44.848165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.917 [2024-12-14 16:49:44.848198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.917 qpair failed and we were unable to recover it. 
00:36:14.917 [2024-12-14 16:49:44.848308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.848341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.848544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.848588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.848798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.848832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.849106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.849140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.849356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.849389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 
00:36:14.918 [2024-12-14 16:49:44.849499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.849532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.849725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.849759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.850009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.850042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.850221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.850255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.850432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.850465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 
00:36:14.918 [2024-12-14 16:49:44.850692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.850728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.850918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.850951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.851131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.851165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.851345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.851378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.851582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.851618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 
00:36:14.918 [2024-12-14 16:49:44.851880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.851913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.852190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.852222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.852413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.852447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.852584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.852619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.852896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.852929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 
00:36:14.918 [2024-12-14 16:49:44.853107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.853140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.853409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.853442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.853652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.853687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.853961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.853994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 00:36:14.918 [2024-12-14 16:49:44.854181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.854215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 
00:36:14.918 [2024-12-14 16:49:44.854419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.918 [2024-12-14 16:49:44.854452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.918 qpair failed and we were unable to recover it. 
00:36:14.921 [2024-12-14 16:49:44.882337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.882371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.882581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.882617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.882814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.882849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.883031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.883064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.883191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.883225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 
00:36:14.921 [2024-12-14 16:49:44.883412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.883446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.883817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.883897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.884199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.884238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.884440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.884476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.884680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.884718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 
00:36:14.921 [2024-12-14 16:49:44.884998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.885033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.885162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.885196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.885400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.885435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.885622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.885658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.885796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.885830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 
00:36:14.921 [2024-12-14 16:49:44.886154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.886187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.886450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.886483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.886686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.886721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.886848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.886882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.887077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.887122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 
00:36:14.921 [2024-12-14 16:49:44.887261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.887294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.887478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.887512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.887774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.887809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.887992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.888025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.888324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.888358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 
00:36:14.921 [2024-12-14 16:49:44.888575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.888610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.888883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.888917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.889138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.889172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.889475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.889509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.889704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.889739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 
00:36:14.921 [2024-12-14 16:49:44.889941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.889975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.890162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.890196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.890412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.890445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.890719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.890755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.890937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.890971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 
00:36:14.921 [2024-12-14 16:49:44.891106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.891140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.891277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.891311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.891493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.891526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.921 qpair failed and we were unable to recover it. 00:36:14.921 [2024-12-14 16:49:44.891663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.921 [2024-12-14 16:49:44.891698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.891976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.892011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 
00:36:14.922 [2024-12-14 16:49:44.892195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.892229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.892359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.892392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.892628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.892663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.892945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.892979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.893115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.893148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 
00:36:14.922 [2024-12-14 16:49:44.893448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.893482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.893748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.893784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.893980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.894014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.894290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.894323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.894604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.894641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 
00:36:14.922 [2024-12-14 16:49:44.894822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.894856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.895132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.895166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.895411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.895446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.895713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.895749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.895949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.895982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 
00:36:14.922 [2024-12-14 16:49:44.896165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.896199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.896379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.896414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.896592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.896626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.896807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.896841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.897040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.897080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 
00:36:14.922 [2024-12-14 16:49:44.897261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.897294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.897573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.897606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.897862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.897895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.898036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.898070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.898182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.898215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 
00:36:14.922 [2024-12-14 16:49:44.898421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.898455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.898654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.898689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.898813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.898848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.899030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.899063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.899265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.899299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 
00:36:14.922 [2024-12-14 16:49:44.899503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.899537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.899659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.899692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.899966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.900001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.900250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.900284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 00:36:14.922 [2024-12-14 16:49:44.900484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.922 [2024-12-14 16:49:44.900517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.922 qpair failed and we were unable to recover it. 
00:36:14.922 [2024-12-14 16:49:44.900810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:14.922 [2024-12-14 16:49:44.900846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:14.922 qpair failed and we were unable to recover it.
00:36:14.925 [2024-12-14 16:49:44.929637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.929671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.929875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.929909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.930114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.930149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.930338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.930372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.930578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.930614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 
00:36:14.925 [2024-12-14 16:49:44.930753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.930787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.930988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.931022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.931227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.931261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.931540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.931586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.931717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.931751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 
00:36:14.925 [2024-12-14 16:49:44.931947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.931980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.932160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.932193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.932465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.932499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.932709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.932744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.932941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.932976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 
00:36:14.925 [2024-12-14 16:49:44.933234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.933268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.933408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.933448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.933582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.933617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.933866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.933899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.934199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.934236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 
00:36:14.925 [2024-12-14 16:49:44.934445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.934479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.934631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.934667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.934946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.934980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.935278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.935312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 00:36:14.925 [2024-12-14 16:49:44.935608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.925 [2024-12-14 16:49:44.935644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.925 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.935925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.935960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.936162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.936195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.936318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.936353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.936533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.936576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.936737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.936771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.937027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.937063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.937344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.937379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.937728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.937764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.938020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.938055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.938185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.938219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.938472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.938505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.938630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.938665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.938849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.938885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.939141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.939176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.939480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.939514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.939744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.939781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.939977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.940013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.940304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.940338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.940610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.940647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.940841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.940875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.941055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.941088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.941298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.941332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.941589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.941625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.941814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.941848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.942060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.942092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.942362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.942396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.942504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.942538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.942761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.942795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.942975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.943008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.943216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.943249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.943431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.943465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.943667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.943707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.943980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.944014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.944306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.944340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.944542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.944586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.944769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.944803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.945078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.945112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.945243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.945276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.945464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.945498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.945664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.945698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.945882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.945915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.946098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.946133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.946439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.946472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.946733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.946769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 00:36:14.926 [2024-12-14 16:49:44.947057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.926 [2024-12-14 16:49:44.947091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.926 qpair failed and we were unable to recover it. 
00:36:14.926 [2024-12-14 16:49:44.947303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.947337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 00:36:14.927 [2024-12-14 16:49:44.947530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.947573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 00:36:14.927 [2024-12-14 16:49:44.947799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.947832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 00:36:14.927 [2024-12-14 16:49:44.948009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.948043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 00:36:14.927 [2024-12-14 16:49:44.948180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.948214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 
00:36:14.927 [2024-12-14 16:49:44.948464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.948497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 00:36:14.927 [2024-12-14 16:49:44.948695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.948729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 00:36:14.927 [2024-12-14 16:49:44.948943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.948976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 00:36:14.927 [2024-12-14 16:49:44.949190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.949224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 00:36:14.927 [2024-12-14 16:49:44.949475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:14.927 [2024-12-14 16:49:44.949508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:14.927 qpair failed and we were unable to recover it. 
00:36:15.216 [2024-12-14 16:49:44.977283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.977318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.977438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.977471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.977708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.977746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.978030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.978065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.978284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.978316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 
00:36:15.216 [2024-12-14 16:49:44.978544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.978592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.978728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.978767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.978900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.978933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.979220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.979256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.979457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.979490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 
00:36:15.216 [2024-12-14 16:49:44.979689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.979723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.979924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.979957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.980145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.980178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.980372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.980405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.980612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.980649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 
00:36:15.216 [2024-12-14 16:49:44.980778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.980812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.980942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.980975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.981185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.981220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.981345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.981379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.981589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.981624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 
00:36:15.216 [2024-12-14 16:49:44.981792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.981825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.982047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.982081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.982282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.982321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.982465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.982500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.982662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.982698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 
00:36:15.216 [2024-12-14 16:49:44.982880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.982912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.983122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.983164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.983376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.983411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.983622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.983656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.983791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.983827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 
00:36:15.216 [2024-12-14 16:49:44.984035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.984067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.984308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.984342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.216 [2024-12-14 16:49:44.984476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.216 [2024-12-14 16:49:44.984510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.216 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.984757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.984798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.984994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.985027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.985285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.985318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.985528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.985575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.985693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.985726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.985917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.985949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.986150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.986182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.986462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.986494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.986755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.986789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.986984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.987015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.987145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.987177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.987369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.987404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.987535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.987608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.987740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.987775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.987926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.987962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.988190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.988227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.988441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.988476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.988594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.988629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.988917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.988952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.989141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.989174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.989389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.989429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.989682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.989718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.989853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.989886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.990062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.990095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.990388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.990439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.990577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.990611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.990831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.990865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.990989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.991021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.991295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.991328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.991443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.991481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.991662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.991716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.992049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.992082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.992275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.992310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.992657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.992699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.992851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.992885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.993014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.993047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.993336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.993371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.993494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.993527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.993676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.993717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.993919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.993954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.994147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.994179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 00:36:15.217 [2024-12-14 16:49:44.994325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.994366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it. 
00:36:15.217 [2024-12-14 16:49:44.994600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.217 [2024-12-14 16:49:44.994635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.217 qpair failed and we were unable to recover it.
00:36:15.217 [... identical connect() errno = 111 / qpair-recovery failures for tqpair=0x7fedf0000b90, addr=10.0.0.2, port=4420 repeated from 16:49:44.994 through 16:49:45.003, trimmed ...]
00:36:15.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1209093 Killed "${NVMF_APP[@]}" "$@"
00:36:15.217 [... repeated connect() errno = 111 / qpair-recovery failures trimmed ...]
00:36:15.217 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:15.217 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:15.217 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:15.217 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:15.217 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:15.217 [... repeated connect() errno = 111 / qpair-recovery failures trimmed ...]
00:36:15.217 [... repeated connect() errno = 111 / qpair-recovery failures for tqpair=0x7fedf0000b90, addr=10.0.0.2, port=4420 from 16:49:45.006 through 16:49:45.011, trimmed ...]
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1209823
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1209823
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1209823 ']'
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:15.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:15.218 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:15.218 [... repeated connect() errno = 111 / qpair-recovery failures trimmed ...]
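The `waitforlisten` step above blocks until the restarted nvmf_tgt process (pid 1209823) is accepting RPC connections on /var/tmp/spdk.sock, giving up after max_retries=100. A minimal sketch of that style of readiness check, in Python rather than the test suite's shell (the function name and delay are illustrative, not SPDK's actual helper):

```python
import os
import socket
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll a UNIX domain socket path until a server accepts connections.

    Returns True once a connect() succeeds, False when retries run out.
    """
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True
            except OSError:
                pass  # path exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

Connecting (rather than just stat-ing the path) matters: the socket file can exist before the daemon calls listen(), so only a successful connect() proves readiness.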
00:36:15.218 [... repeated connect() errno = 111 / qpair-recovery failures for tqpair=0x7fedf0000b90, addr=10.0.0.2, port=4420 from 16:49:45.015 through 16:49:45.019, trimmed ...]
00:36:15.218 [2024-12-14 16:49:45.019343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.019383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.019656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.019694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.019865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.019902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.020098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.020132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.020272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.020325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 
00:36:15.218 [2024-12-14 16:49:45.020469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.020504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.020688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.020725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.020856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.020891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.021035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.021071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.021306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.021350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 
00:36:15.218 [2024-12-14 16:49:45.021592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.021630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.021755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.021802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.022075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.022112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.022278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.022314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.022436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.022471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 
00:36:15.218 [2024-12-14 16:49:45.022686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.022729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.022959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.022996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.023203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.023240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.023436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.023471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.023637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.023674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 
00:36:15.218 [2024-12-14 16:49:45.023874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.023911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.024036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.024071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.024261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.024297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.024589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.024627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.024772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.024807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 
00:36:15.218 [2024-12-14 16:49:45.024937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.024972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.025235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.025272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.025554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.025606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.025729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.025764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.025890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.025925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 
00:36:15.218 [2024-12-14 16:49:45.026135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.026173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.026371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.026418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.026602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.026639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.026831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.026867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.027075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.027109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 
00:36:15.218 [2024-12-14 16:49:45.027424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.027462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.027623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.027661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.027874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.027909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.028070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.028107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 00:36:15.218 [2024-12-14 16:49:45.028225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.028260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.218 qpair failed and we were unable to recover it. 
00:36:15.218 [2024-12-14 16:49:45.028465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.218 [2024-12-14 16:49:45.028502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.028823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.028861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.029106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.029143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.029293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.029329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.029582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.029642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 
00:36:15.219 [2024-12-14 16:49:45.029850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.029885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.030002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.030037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.030188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.030232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.030392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.030427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.030660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.030706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 
00:36:15.219 [2024-12-14 16:49:45.030841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.030882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.031020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.031067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.031181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.031216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.031415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.031450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.031639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.031676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 
00:36:15.219 [2024-12-14 16:49:45.031899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.031934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.032155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.032209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.032411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.032448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.032648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.032684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.032838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.032873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 
00:36:15.219 [2024-12-14 16:49:45.033099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.033134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.033414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.033448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.033628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.033664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.033807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.033842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.033981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.034017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 
00:36:15.219 [2024-12-14 16:49:45.034312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.034349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.034554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.034607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.034746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.034786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.034968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.035026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.035227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.035267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 
00:36:15.219 [2024-12-14 16:49:45.035451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.035486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.035613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.035649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.035788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.035824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.036030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.036065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 00:36:15.219 [2024-12-14 16:49:45.036213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.219 [2024-12-14 16:49:45.036248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.219 qpair failed and we were unable to recover it. 
00:36:15.219 [2024-12-14 16:49:45.036431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.219 [2024-12-14 16:49:45.036465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.219 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 16:49:45.036 through 16:49:45.061, cycling through tqpair=0x7fedf8000b90, tqpair=0x7fedec000b90, and tqpair=0xbebcd0 ...]
00:36:15.220 [2024-12-14 16:49:45.061704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.061740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 00:36:15.220 [2024-12-14 16:49:45.062020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.062055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 00:36:15.220 [2024-12-14 16:49:45.062335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.062369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 00:36:15.220 [2024-12-14 16:49:45.062622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.062657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 00:36:15.220 [2024-12-14 16:49:45.062933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.062967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 
00:36:15.220 [2024-12-14 16:49:45.063250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.063285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 00:36:15.220 [2024-12-14 16:49:45.063431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.063465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 00:36:15.220 [2024-12-14 16:49:45.063607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.063644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 00:36:15.220 [2024-12-14 16:49:45.063772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.063805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 00:36:15.220 [2024-12-14 16:49:45.063929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.220 [2024-12-14 16:49:45.063963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.220 qpair failed and we were unable to recover it. 
00:36:15.220 [2024-12-14 16:49:45.064388] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:36:15.220 [2024-12-14 16:49:45.064436] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:15.220 [2024-12-14 16:49:45.068741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.220 [2024-12-14 16:49:45.068781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.220 qpair failed and we were unable to recover it.
00:36:15.221 [2024-12-14 16:49:45.082825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.082858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.083103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.083138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.083416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.083449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.083584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.083619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.083823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.083856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.084002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.084035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.084255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.084289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.084533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.084574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.084760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.084794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.084981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.085014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.085271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.085304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.085600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.085635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.085897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.085931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.086052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.086087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.086244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.086277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.086592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.086626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.086824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.086857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.086987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.087020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.087199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.087233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.087433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.087467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.087679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.087714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.087892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.087924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.088125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.088158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.088354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.088387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.088530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.088572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.088754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.088786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.088928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.088961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.089113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.089146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.089436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.089469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.089670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.089704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.089882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.089916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.090141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.090173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.090440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.090484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.090619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.090653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.090890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.090923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.091034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.091065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.091271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.091304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.091547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.091594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.091721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.091753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.091972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.092004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.092254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.092288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.092505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.092539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.092689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.092722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.092866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.092899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 00:36:15.221 [2024-12-14 16:49:45.093008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.093041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.221 qpair failed and we were unable to recover it. 
00:36:15.221 [2024-12-14 16:49:45.093173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.221 [2024-12-14 16:49:45.093205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.093330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.093363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.093540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.093586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.093770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.093803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.094031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.094065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 
00:36:15.222 [2024-12-14 16:49:45.094245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.094279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.094456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.094489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.094616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.094652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.094860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.094893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.095076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.095109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 
00:36:15.222 [2024-12-14 16:49:45.095229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.095262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.095468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.095501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.095693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.095726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.095959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.095992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.096189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.096223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 
00:36:15.222 [2024-12-14 16:49:45.096411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.096444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.096590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.096624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.096821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.096853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.097116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.097150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.097337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.097371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 
00:36:15.222 [2024-12-14 16:49:45.097655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.097690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.097878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.097912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.098087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.098121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.098322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.098356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.098503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.098538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 
00:36:15.222 [2024-12-14 16:49:45.098664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.098699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.098848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.098881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.099074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.099113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.099406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.099440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.099573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.099607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 
00:36:15.222 [2024-12-14 16:49:45.099742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.099776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.100023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.100055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.100270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.100303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.100520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.100552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.100711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.100744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 
00:36:15.222 [2024-12-14 16:49:45.101005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.101040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.101165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.101198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.101450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.101485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.101676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.101714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 00:36:15.222 [2024-12-14 16:49:45.101831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.222 [2024-12-14 16:49:45.101865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.222 qpair failed and we were unable to recover it. 
00:36:15.222 [2024-12-14 16:49:45.101973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.102004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.102325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.102360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.102502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.102536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.102761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.102795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.103038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.103072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.103295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.103329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.103579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.103613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.103740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.103774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.103972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.104007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.104273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.104307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.104490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.104523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.104820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.104855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.105116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.105150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.105342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.105376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.105532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.105577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.105849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.105882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.106083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.106115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.106350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.106384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.106568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.106603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.106796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.106830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.107022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.107055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.107254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.107288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.107484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.107518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.107771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.107805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.108012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.108046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.108265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.108298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.108486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.108519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.108750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.108794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.108920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.108953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.109125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.109159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.109436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.109471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.109688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.109724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.109863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.109896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.110092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.110125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.110351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.110384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.110575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.110609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.110752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.110785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.110912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.110945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.111073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.111106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.111284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.111317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.111602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.111635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.111909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.111943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.112248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.112293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.112551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.112614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.112803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.112836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.113056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.113088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.113224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.113256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.113553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.113596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.113713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.113746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.113929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.113961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.114143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.114175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.114374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.114407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.114583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.114618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.114744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.114776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.114985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.115037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.115274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.115324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.115470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.115523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.115840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.115875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.116100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.222 [2024-12-14 16:49:45.116132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.222 qpair failed and we were unable to recover it.
00:36:15.222 [2024-12-14 16:49:45.116351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.116383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.116567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.116600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.116863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.116896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.117147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.117181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.117371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.117405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.117599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.117634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.117750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.117784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.117986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.118020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.118287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.118326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.118600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.118635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.118839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.118873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.119062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.119094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.119359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.119393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.119595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.119630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.119746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.119779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.120011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.120044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.120242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.120275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.120478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.120512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.120719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.120752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.120888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.120921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.121162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.121196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.121330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.121361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.121589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.121623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.121896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.121930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.122104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.122137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.122312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.122345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.122546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.122588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.122768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.122801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.123050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.123083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.123200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.123233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.123416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.123449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.123678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.123713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.123907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.123939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.124114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.124146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.124319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.124352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.124564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.223 [2024-12-14 16:49:45.124610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.223 qpair failed and we were unable to recover it.
00:36:15.223 [2024-12-14 16:49:45.124809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.124844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.125071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.125103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.125352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.125385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.125643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.125678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.125880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.125913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.126094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.126126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.126335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.126368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.126571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.126605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.126732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.126765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.126890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.126923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.127193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.127226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.127403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.127436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.127615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.127650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.127859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.127892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.128014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.128047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.128176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.128209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.128487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.128520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.128729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.128763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.128955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.128988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.129203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.129236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.129427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.129461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.129642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.129676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.129916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.129949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.130177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.130209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.130326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.130359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.130576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.130611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.130849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.130889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.131091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.131124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.131314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.131348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.131476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.131510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.131693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.131727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.131903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.131935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.132121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.132154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.132363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.132396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.132579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.132613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.132826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.132859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.133034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.133067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.133265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.133298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.133467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.133500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.133761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.133795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.133993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.134027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.134208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.134241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.134439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.134472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.134675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.134709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.223 [2024-12-14 16:49:45.134881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.134914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 
00:36:15.223 [2024-12-14 16:49:45.135130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.223 [2024-12-14 16:49:45.135162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.223 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.135329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.135362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.135502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.135535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.135783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.135816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.135931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.135965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.136070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.136103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.136361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.136394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.136573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.136607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.136721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.136756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.136875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.136907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.137082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.137115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.137374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.137407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.137595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.137629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.137811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.137844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.138013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.138045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.138258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.138292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.138414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.138447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.138712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.138747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.138984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.139017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.139217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.139249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.139508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.139542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.139743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.139776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.139995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.140034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.140210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.140244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.140535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.140582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.140779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.140812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.141105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.141138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.141377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.141410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.141601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.141635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.141758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.141790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.142006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.142040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.142166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.142199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.142376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.142408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.142601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.142634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.142804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.142835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.142960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.143002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.143218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.143251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.143373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.143405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.143643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.143677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.143848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.143881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.144001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.144034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.144209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.144241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.144408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.144441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.144581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.144615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.144808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.144841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.145082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.145114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.145234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.145265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.145468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.145500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.145712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.145745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 00:36:15.224 [2024-12-14 16:49:45.145932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.224 [2024-12-14 16:49:45.145963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.224 qpair failed and we were unable to recover it. 
00:36:15.224 [2024-12-14 16:49:45.146009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[repeated: connect() failed, errno = 111; sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (16:49:45.146084 - 16:49:45.146940)]
[repeated: connect() failed, errno = 111; sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (16:49:45.147238 - 16:49:45.157288)]
00:36:15.225 [2024-12-14 16:49:45.157492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.225 [2024-12-14 16:49:45.157526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:15.225 qpair failed and we were unable to recover it.
00:36:15.225 [2024-12-14 16:49:45.157779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.225 [2024-12-14 16:49:45.157831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.225 qpair failed and we were unable to recover it.
[repeated: connect() failed, errno = 111; sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (16:49:45.157974 - 16:49:45.158507)]
[repeated: connect() failed, errno = 111; sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (16:49:45.158784 - 16:49:45.168072)]
00:36:15.225 [2024-12-14 16:49:45.168334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.225 [2024-12-14 16:49:45.168371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.225 qpair failed and we were unable to recover it. 00:36:15.225 [2024-12-14 16:49:45.168510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.225 [2024-12-14 16:49:45.168543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.225 qpair failed and we were unable to recover it. 00:36:15.225 [2024-12-14 16:49:45.168663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.225 [2024-12-14 16:49:45.168698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.225 qpair failed and we were unable to recover it. 00:36:15.225 [2024-12-14 16:49:45.168915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.225 [2024-12-14 16:49:45.168948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.225 qpair failed and we were unable to recover it. 00:36:15.225 [2024-12-14 16:49:45.168969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.225 [2024-12-14 16:49:45.169001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.225 [2024-12-14 16:49:45.169008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.225 [2024-12-14 16:49:45.169014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:15.225 [2024-12-14 16:49:45.169019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:15.225 [2024-12-14 16:49:45.169129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.225 [2024-12-14 16:49:45.169162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.225 qpair failed and we were unable to recover it.
00:36:15.225 [... the preceding connect()/qpair-failure message group for tqpair=0x7fedec000b90 repeats 4 more times between 16:49:45.169330 and 16:49:45.170053; only the timestamps differ ...]
00:36:15.225 [2024-12-14 16:49:45.170238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.225 [2024-12-14 16:49:45.170271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.225 qpair failed and we were unable to recover it.
00:36:15.225 [... the preceding connect()/qpair-failure message group for tqpair=0x7fedec000b90 repeats 2 more times between 16:49:45.170442 and 16:49:45.170624; only the timestamps differ ...]
00:36:15.225 [2024-12-14 16:49:45.170532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:15.225 [2024-12-14 16:49:45.170738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.225 [2024-12-14 16:49:45.170640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:15.225 [2024-12-14 16:49:45.170746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:15.225 [2024-12-14 16:49:45.170771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.225 qpair failed and we were unable to recover it.
00:36:15.225 [2024-12-14 16:49:45.170748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:15.225 [2024-12-14 16:49:45.170981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.225 [2024-12-14 16:49:45.171012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.225 qpair failed and we were unable to recover it.
00:36:15.225 [... the preceding connect()/qpair-failure message group for tqpair=0x7fedec000b90 repeats 4 more times between 16:49:45.171179 and 16:49:45.171883; only the timestamps differ ...]
00:36:15.225 [2024-12-14 16:49:45.172018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.225 [2024-12-14 16:49:45.172051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.225 qpair failed and we were unable to recover it.
00:36:15.226 [... the preceding connect()/qpair-failure message group for tqpair=0x7fedec000b90 repeats 24 more times between 16:49:45.172240 and 16:49:45.176970; only the timestamps differ ...]
00:36:15.226 [2024-12-14 16:49:45.177150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.226 [2024-12-14 16:49:45.177184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.226 qpair failed and we were unable to recover it.
00:36:15.226 [... the preceding connect()/qpair-failure message group for tqpair=0x7fedec000b90 repeats 3 more times between 16:49:45.177442 and 16:49:45.177910; only the timestamps differ ...]
00:36:15.226 [2024-12-14 16:49:45.178118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.226 [2024-12-14 16:49:45.178170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.226 qpair failed and we were unable to recover it.
00:36:15.226 [2024-12-14 16:49:45.178395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.226 [2024-12-14 16:49:45.178443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.226 qpair failed and we were unable to recover it.
00:36:15.226 [... the preceding connect()/qpair-failure message group for tqpair=0xbebcd0 repeats 54 more times between 16:49:45.178738 and 16:49:45.190365; only the timestamps differ ...]
00:36:15.226 [2024-12-14 16:49:45.190585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.190620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.190829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.190864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.191056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.191090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.191321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.191355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.191490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.191523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 
00:36:15.226 [2024-12-14 16:49:45.191751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.191787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.191892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.191925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.192111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.192144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.192328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.192362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.192479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.192512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 
00:36:15.226 [2024-12-14 16:49:45.192657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.192692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.192870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.192904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.193227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.193282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.193554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.193601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.193856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.193888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 
00:36:15.226 [2024-12-14 16:49:45.194008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.194040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.194231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.194265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.194436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.194469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.226 qpair failed and we were unable to recover it. 00:36:15.226 [2024-12-14 16:49:45.194736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.226 [2024-12-14 16:49:45.194771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.194951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.194984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.195104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.195137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.195309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.195342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.195456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.195489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.195674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.195708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.195835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.195868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.196041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.196082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.196267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.196301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.196468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.196501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.196702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.196736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.196863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.196897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.197021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.197055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.197222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.197255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.197376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.197409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.197657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.197693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.197821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.197855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.198092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.198125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.198236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.198269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.198542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.198591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.198712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.198746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.198895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.198930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.199052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.199085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.199323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.199357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.199567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.199601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.199715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.199750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.199951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.199986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.200090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.200122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.200305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.200337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.200542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.200587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.200702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.200735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.200996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.201031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.201316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.201350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.201534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.201576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.201811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.201858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.201972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.202006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.202138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.202171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.202337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.202369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.202569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.202603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.202887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.202919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.203039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.203071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.203200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.203233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.203496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.203529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.203712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.203746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.203865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.203898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.204006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.204039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.204278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.204311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.204622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.204665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.204870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.204904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.205123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.205156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.205418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.205451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.205622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.205656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.205772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.205805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.205946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.205979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.206088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.206121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.206315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.206349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.206604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.206640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.206926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.206961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.207086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.207120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.207239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.207272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.207493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.207528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 00:36:15.227 [2024-12-14 16:49:45.207746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.227 [2024-12-14 16:49:45.207780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.227 qpair failed and we were unable to recover it. 
00:36:15.227 [2024-12-14 16:49:45.210067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.227 [2024-12-14 16:49:45.210100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.227 qpair failed and we were unable to recover it.
00:36:15.227 [2024-12-14 16:49:45.210218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.227 [2024-12-14 16:49:45.210250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.227 qpair failed and we were unable to recover it.
00:36:15.227 [2024-12-14 16:49:45.210571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.227 [2024-12-14 16:49:45.210623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.227 qpair failed and we were unable to recover it.
00:36:15.227 [2024-12-14 16:49:45.210827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.227 [2024-12-14 16:49:45.210860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.227 qpair failed and we were unable to recover it.
00:36:15.227 [2024-12-14 16:49:45.211078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.227 [2024-12-14 16:49:45.211112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.227 qpair failed and we were unable to recover it.
00:36:15.228 [2024-12-14 16:49:45.219128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.228 [2024-12-14 16:49:45.219161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.228 qpair failed and we were unable to recover it.
00:36:15.228 [2024-12-14 16:49:45.219348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.228 [2024-12-14 16:49:45.219381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.228 qpair failed and we were unable to recover it.
00:36:15.228 [2024-12-14 16:49:45.219537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.228 [2024-12-14 16:49:45.219602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.228 qpair failed and we were unable to recover it.
00:36:15.228 [2024-12-14 16:49:45.219746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.228 [2024-12-14 16:49:45.219790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:15.228 qpair failed and we were unable to recover it.
00:36:15.228 [2024-12-14 16:49:45.219930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.228 [2024-12-14 16:49:45.219971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.228 qpair failed and we were unable to recover it.
00:36:15.228 [2024-12-14 16:49:45.231656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.231692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 00:36:15.228 [2024-12-14 16:49:45.231817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.231849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 00:36:15.228 [2024-12-14 16:49:45.231962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.231995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 00:36:15.228 [2024-12-14 16:49:45.232248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.232281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 00:36:15.228 [2024-12-14 16:49:45.232472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.232505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 
00:36:15.228 [2024-12-14 16:49:45.232652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.232687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 00:36:15.228 [2024-12-14 16:49:45.232871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.232905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 00:36:15.228 [2024-12-14 16:49:45.233026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.233059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 00:36:15.228 [2024-12-14 16:49:45.233171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.233202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.228 qpair failed and we were unable to recover it. 00:36:15.228 [2024-12-14 16:49:45.233332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.228 [2024-12-14 16:49:45.233366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.233609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.233645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.233863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.233896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.234073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.234105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.234230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.234263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.234436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.234468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.234662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.234696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.234873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.234905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.235027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.235060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.235274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.235306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.235486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.235519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.235674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.235708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.235844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.235877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.236138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.236178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.236376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.236409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.236583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.236617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.236807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.236840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.236971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.237004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.237126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.237159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.237365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.237398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.237589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.237623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.237754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.237787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.237959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.237991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.238226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.238258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.238426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.238459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.238631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.238664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.238785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.238818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.238974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.239007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.239138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.239171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.239379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.239413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.239534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.239576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.239704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.239736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.239879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.239912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.240054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.240086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.240238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.240270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.240397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.240430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.240693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.240727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.240896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.240928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.241058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.241091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.241309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.241343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.241476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.241509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.241657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.241692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.241840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.241872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.242109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.242141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.242266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.242298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.242478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.242510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.242666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.242699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.242879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.242912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.243082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.243115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.243338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.243371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.243647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.243682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.243799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.243832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.243942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.243973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.244077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.244115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.244300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.244332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.244439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.244471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.244730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.244764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.244885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.244918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.245039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.245072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.245380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.245412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.245586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.245622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.245912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.245945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.246068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.246101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.246221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.246254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.246457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.246489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.246678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.246712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.246836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.246867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.247061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.247093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.247382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.247416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.247703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.247737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.247996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.248028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.248171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.248204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.248375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.248406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.248646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.248680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.248866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.248897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 00:36:15.229 [2024-12-14 16:49:45.249036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.229 [2024-12-14 16:49:45.249068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.229 qpair failed and we were unable to recover it. 
00:36:15.229 [2024-12-14 16:49:45.249289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.249322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.249514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.249546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.249762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.249796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.249977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.250009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.250155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.250187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.250379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.250411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.250582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.250615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.250734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.250766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.250895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.250927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.251096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.251129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.251316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.251348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.251469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.251502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.251756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.251790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.251975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.252007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.252136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.252169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.252292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.252325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.252595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.252629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.252742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.252781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.253003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.253035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.253171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.253203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.253455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.253488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.253651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.253684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.253872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.253905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.254092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.254124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.254305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.254336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.229 qpair failed and we were unable to recover it.
00:36:15.229 [2024-12-14 16:49:45.254517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.229 [2024-12-14 16:49:45.254549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.254662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.254695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.254842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.254874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.255044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.255077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.255251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.255284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.255482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.255513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.255734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.255768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.255872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.255904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.256032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.256064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.256318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.256351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.256457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.256489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.256673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.256706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.256830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.256862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.257033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.257065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.257251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.257283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.257569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.257603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.257812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.257855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:15.230 [2024-12-14 16:49:45.257981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.258015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.258254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.258295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:15.230 [2024-12-14 16:49:45.258537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.258585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:15.230 [2024-12-14 16:49:45.258818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.258852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:15.230 [2024-12-14 16:49:45.258984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.259017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.259318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.259351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.259466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.259499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.259685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.259719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.259846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.259880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.260050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.260084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.260268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.260304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.260488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.260519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.260637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.260669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.260795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.260838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.261097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.261127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.261332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.261365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.261506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.261539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.261742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.261774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.261919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.261950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.262079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.262111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.262251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.262283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.262397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.262428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.262567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.262602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.262708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.262740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.262848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.262880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.263086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.263130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.263362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.263395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.263645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.263693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.263914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.263947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.264067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.264099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.264300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.264333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.264570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.264605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.264793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.264825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.264950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.264982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.265124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.265157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.265379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.265412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.265536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.265581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.265735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.265771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.265898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.265929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.266061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.266092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.266381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.266419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.266600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.266634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.266776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.266808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.266926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.266959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.267134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.267167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.267348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.267380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.267569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.267602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.267788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.267820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.267946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.267978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.268215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.268247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.268383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.230 [2024-12-14 16:49:45.268415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.230 qpair failed and we were unable to recover it.
00:36:15.230 [2024-12-14 16:49:45.268537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.268580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 00:36:15.230 [2024-12-14 16:49:45.268692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.268725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 00:36:15.230 [2024-12-14 16:49:45.268896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.268929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 00:36:15.230 [2024-12-14 16:49:45.269042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.269079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 00:36:15.230 [2024-12-14 16:49:45.269291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.269325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 
00:36:15.230 [2024-12-14 16:49:45.269451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.269482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 00:36:15.230 [2024-12-14 16:49:45.269600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.269634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 00:36:15.230 [2024-12-14 16:49:45.269756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.269787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 00:36:15.230 [2024-12-14 16:49:45.269907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.269938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 00:36:15.230 [2024-12-14 16:49:45.270063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.230 [2024-12-14 16:49:45.270094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.230 qpair failed and we were unable to recover it. 
00:36:15.230 [2024-12-14 16:49:45.270291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.270323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.270501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.270533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.270649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.270681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.270787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.270818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.270954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.270985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.271094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.271125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.271334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.271372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.271636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.271669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.271786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.271820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.271982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.272012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.272220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.272251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.272473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.272505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.272715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.272748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.272884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.272915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.273095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.273127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.273240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.273270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.273383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.273414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.273605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.273638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.273757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.273788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.273924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.273955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.274094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.274126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.274307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.274338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.274454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.274485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.274653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.274685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.274818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.274850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.274962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.274993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.275259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.275291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.275413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.275445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.275617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.275648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.275775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.275806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.275916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.275948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.276095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.276126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.276254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.276287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.276424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.276456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.276579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.276611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.276750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.276781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.276915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.276946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.277050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.277082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.277211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.277242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.277421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.277452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.277577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.277611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.277741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.277772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.277901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.277932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.278054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.278084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.278205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.278236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.278403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.278434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.278550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.278602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.278744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.278775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.278915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.278946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.279056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.279087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.279200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.279231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.279340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.279371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.279483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.279514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.279666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.279699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.279811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.279842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.279964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.279996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.280118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.280149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.280271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.280304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.280485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.280516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.280662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.280696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.280816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.280848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.280965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.280997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.281106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.281138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.281285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.281316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.281491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.281522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.281733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.281767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.281887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.281919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.282033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.282065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.282287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.282318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.282434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.282465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.282624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.282657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.282776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.282806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.282930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.282960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedec000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.283194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.283240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 00:36:15.231 [2024-12-14 16:49:45.283427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.231 [2024-12-14 16:49:45.283464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.231 qpair failed and we were unable to recover it. 
00:36:15.231 [2024-12-14 16:49:45.283646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.231 [2024-12-14 16:49:45.283680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420
00:36:15.231 qpair failed and we were unable to recover it.
00:36:15.231 [... preceding connect()/sock-connection-error/qpair-failed triple repeated 52 more times for tqpair=0xbebcd0, timestamps 16:49:45.283811 through 16:49:45.293705 ...]
00:36:15.524 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:15.524 [2024-12-14 16:49:45.293838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.524 [2024-12-14 16:49:45.293878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 [2024-12-14 16:49:45.294021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.524 [2024-12-14 16:49:45.294063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420
00:36:15.524 qpair failed and we were unable to recover it.
00:36:15.524 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:15.524 [... same triple repeated once more for tqpair=0x7fedf0000b90 at 16:49:45.294189 ...]
00:36:15.524 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:15.524 [... same triple repeated 2 more times for tqpair=0x7fedf0000b90, timestamps 16:49:45.294496 and 16:49:45.294674 ...]
00:36:15.524 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:15.524 [... same triple repeated 25 more times for tqpair=0x7fedf0000b90, timestamps 16:49:45.294824 through 16:49:45.299709 ...]
00:36:15.524 [... same triple repeated 31 times for tqpair=0xbebcd0, timestamps 16:49:45.299843 through 16:49:45.306022; last console timestamp 00:36:15.525 ...]
00:36:15.525 [2024-12-14 16:49:45.306132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.306163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.306414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.306445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.306658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.306691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.306866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.306897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.307052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.307084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 16:49:45.307285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.307317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.307504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.307536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.307677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.307709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.307843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.307875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.308054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.308085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 16:49:45.308268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.308300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.308612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.308647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.308833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.308864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.309050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.309081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.309284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.309315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-12-14 16:49:45.309431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.309463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.309651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.525 [2024-12-14 16:49:45.309685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-12-14 16:49:45.309798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.309831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.310032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.310065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.310352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.310383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 
00:36:15.526 [2024-12-14 16:49:45.310685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.310718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.310862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.310893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.311101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.311134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.311241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.311272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.311422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.311454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 
00:36:15.526 [2024-12-14 16:49:45.311639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.311672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.311794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.311825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.311931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.311963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.312090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.312121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.312321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.312352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 
00:36:15.526 [2024-12-14 16:49:45.312569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.312602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.312730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.312766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.312958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.312989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.313178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.313210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.313333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.313363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 
00:36:15.526 [2024-12-14 16:49:45.313606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.313639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.313757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.313788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.313923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.313954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.314143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.314174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.314429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.314460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 
00:36:15.526 [2024-12-14 16:49:45.314628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.314661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.314767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.314798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.314904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.314935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.315044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.315075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.315196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.315227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 
00:36:15.526 [2024-12-14 16:49:45.315427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.315459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.315590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.315622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.315815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.315846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.315952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.315983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.316090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.316121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 
00:36:15.526 [2024-12-14 16:49:45.316387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.316419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.316593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.316626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.316795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.316827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.316997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.317028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.317172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.317203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 
00:36:15.526 [2024-12-14 16:49:45.317327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.317358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.317546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.317590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.317709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.317740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.317932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.317969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.526 qpair failed and we were unable to recover it. 00:36:15.526 [2024-12-14 16:49:45.318096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.526 [2024-12-14 16:49:45.318127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 
00:36:15.527 [2024-12-14 16:49:45.318336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.318367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.318608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.318641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.318856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.318888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.319059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.319090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.319279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.319311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 
00:36:15.527 [2024-12-14 16:49:45.319482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.319515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.319791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.319824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.319992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.320024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.320239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.320271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.320526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.320566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 
00:36:15.527 [2024-12-14 16:49:45.320672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.320702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.320911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.320942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.321277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.321325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.321639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.321679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.321944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.321978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 
00:36:15.527 [2024-12-14 16:49:45.322169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.322202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.322467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.322500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.322774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.322808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.322924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.322957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.323196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.323227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 
00:36:15.527 [2024-12-14 16:49:45.323501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.323533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.323735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.323775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.323901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.323932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.324194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.324227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.324461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.324494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 
00:36:15.527 [2024-12-14 16:49:45.324716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.324755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.324933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.324966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.325135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.325166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.325355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.325386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.325563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.325596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 
00:36:15.527 [2024-12-14 16:49:45.325817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.325848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.325967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.325998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.326205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.326236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.326451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.326481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 00:36:15.527 [2024-12-14 16:49:45.326672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.527 [2024-12-14 16:49:45.326705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.527 qpair failed and we were unable to recover it. 
00:36:15.527 [2024-12-14 16:49:45.326813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.326845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.327100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.327131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.327406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.327437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.327623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.327655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcd0 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.327877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.327915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 
00:36:15.528 [2024-12-14 16:49:45.328096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.328129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.328365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.328397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 Malloc0 00:36:15.528 [2024-12-14 16:49:45.328605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.328639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.328924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.328956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.329165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.329196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 
00:36:15.528 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.528 [2024-12-14 16:49:45.329374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.329407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.329517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.329549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:15.528 [2024-12-14 16:49:45.329728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.329761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.329948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.329980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 
00:36:15.528 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.528 [2024-12-14 16:49:45.330173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.330206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:15.528 [2024-12-14 16:49:45.330480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.330512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.330741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.330775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.330899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.330932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 
00:36:15.528 [2024-12-14 16:49:45.331124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.331155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.331390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.331422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.331673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.331706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.331873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.331905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.332113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.332145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 
00:36:15.528 [2024-12-14 16:49:45.332334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.332366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.332478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.332510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.332734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.332767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.332898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.332929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.333194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.333226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 
00:36:15.528 [2024-12-14 16:49:45.333420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.333451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.333627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.333660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.333851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.333882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.334083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.334114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.334374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.334405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 
00:36:15.528 [2024-12-14 16:49:45.334679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.334712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.334924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.334957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.335170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.335202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.335480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.335512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.335672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.335706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 
00:36:15.528 [2024-12-14 16:49:45.335842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.335874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.336043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.528 [2024-12-14 16:49:45.336075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.528 [2024-12-14 16:49:45.336085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.528 qpair failed and we were unable to recover it. 00:36:15.528 [2024-12-14 16:49:45.336359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.336392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.336655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.336687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.336831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.336863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 
00:36:15.529 [2024-12-14 16:49:45.336998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.337029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.337211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.337241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.337468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.337499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.337708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.337742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.337911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.337942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 
00:36:15.529 [2024-12-14 16:49:45.338125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.338156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.338428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.338459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.338635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.338668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.338867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.338899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.339091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.339122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 
00:36:15.529 [2024-12-14 16:49:45.339392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.339423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.339620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.339653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.339821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.339864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.340037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.340069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.340176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.340208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 
00:36:15.529 [2024-12-14 16:49:45.340479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.340511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.340681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.340713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.340843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.340875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.341051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.341083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.341191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.341222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 
00:36:15.529 [2024-12-14 16:49:45.341389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.341421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.341674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.341707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.341826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.341857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.342052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.342084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.342337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.342368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 
00:36:15.529 [2024-12-14 16:49:45.342481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.342513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.342707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.342741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.342857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.342888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.343072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.343103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.343221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.343252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 
00:36:15.529 [2024-12-14 16:49:45.343491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.343523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.343722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.343755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.343879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.343910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.344107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.344138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.344351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.344383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 
00:36:15.529 [2024-12-14 16:49:45.344586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.344619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.529 [2024-12-14 16:49:45.344853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.344885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 [2024-12-14 16:49:45.345143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.529 [2024-12-14 16:49:45.345175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.529 qpair failed and we were unable to recover it. 00:36:15.529 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:15.529 [2024-12-14 16:49:45.345356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.345387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.530 [2024-12-14 16:49:45.345596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.345631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.345801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.345833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.346003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.346034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.346204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.346235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 [2024-12-14 16:49:45.346423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.346454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.346713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.346746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.346917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.346948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.347186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.347217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.347471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.347502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 [2024-12-14 16:49:45.347766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.347797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.348080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.348111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.348248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.348285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.348452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.348483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.348595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.348628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 [2024-12-14 16:49:45.348799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.348830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.349096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.349127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.349362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.349393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.349656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.349687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.349904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.349934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 [2024-12-14 16:49:45.350103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.350133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.350345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.350376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.350611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.350642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.350879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.350911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.351081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.351111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 [2024-12-14 16:49:45.351297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.351327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.351589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.351622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.351810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.351840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.352029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.352060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.352326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.352357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 [2024-12-14 16:49:45.352607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.352639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.530 [2024-12-14 16:49:45.352833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.352864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.353121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.353152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:15.530 [2024-12-14 16:49:45.353316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.353346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.530 [2024-12-14 16:49:45.353538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.353578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.353768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.353800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:15.530 [2024-12-14 16:49:45.353927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.353958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.354194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.354231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 
00:36:15.530 [2024-12-14 16:49:45.354498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.354529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf8000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.354712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.530 [2024-12-14 16:49:45.354764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.530 qpair failed and we were unable to recover it. 00:36:15.530 [2024-12-14 16:49:45.355037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.355069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.355245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.355277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.355467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.355499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 
00:36:15.531 [2024-12-14 16:49:45.355626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.355659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.355895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.355927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.356226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.356257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.356467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.356499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.356684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.356717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 
00:36:15.531 [2024-12-14 16:49:45.356845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.356877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.357065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.357097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.357293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.357325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.357520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.357553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.357758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.357789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 
00:36:15.531 [2024-12-14 16:49:45.357905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.357937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.358228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.358261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.358542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.358585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.358709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.358741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.358932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.358963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 
00:36:15.531 [2024-12-14 16:49:45.359186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.359217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.359461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.359492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.359799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.359833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.360040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.360071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.360242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.360273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 
00:36:15.531 [2024-12-14 16:49:45.360554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.360596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.531 [2024-12-14 16:49:45.360865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.360899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.361127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.361159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.531 [2024-12-14 16:49:45.361325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.361357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 
00:36:15.531 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.531 [2024-12-14 16:49:45.361592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.361625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:15.531 [2024-12-14 16:49:45.361883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.361915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.362204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.362235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.362467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.362499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 
00:36:15.531 [2024-12-14 16:49:45.362780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.362813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.363078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.363110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.363392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.363424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.363595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.363628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 00:36:15.531 [2024-12-14 16:49:45.363879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.531 [2024-12-14 16:49:45.363916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.531 qpair failed and we were unable to recover it. 
00:36:15.532 [2024-12-14 16:49:45.364155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.532 [2024-12-14 16:49:45.364186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fedf0000b90 with addr=10.0.0.2, port=4420 00:36:15.532 qpair failed and we were unable to recover it. 00:36:15.532 [2024-12-14 16:49:45.364282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.532 [2024-12-14 16:49:45.366765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.532 [2024-12-14 16:49:45.366911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.532 [2024-12-14 16:49:45.366955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.532 [2024-12-14 16:49:45.366977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.532 [2024-12-14 16:49:45.366996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.532 [2024-12-14 16:49:45.367049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.532 qpair failed and we were unable to recover it. 
00:36:15.532 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.532 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:15.532 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.532 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:15.532 [2024-12-14 16:49:45.376687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.532 [2024-12-14 16:49:45.376779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.532 [2024-12-14 16:49:45.376813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.532 [2024-12-14 16:49:45.376831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.532 [2024-12-14 16:49:45.376848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.532 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.532 [2024-12-14 16:49:45.376888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.532 qpair failed and we were unable to recover it. 
00:36:15.532 16:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1209297 00:36:15.532 [2024-12-14 16:49:45.386671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.532 [2024-12-14 16:49:45.386740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.532 [2024-12-14 16:49:45.386763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.532 [2024-12-14 16:49:45.386776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.532 [2024-12-14 16:49:45.386787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.532 [2024-12-14 16:49:45.386818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.532 qpair failed and we were unable to recover it. 
00:36:15.532 [2024-12-14 16:49:45.396717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.396832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.396847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.396855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.396863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.396882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.406642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.406702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.406715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.406722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.406729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.406745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.416651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.416706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.416719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.416726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.416732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.416747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.426743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.426797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.426811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.426818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.426825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.426841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.436723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.436784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.436798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.436804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.436811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.436826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.446775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.446844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.446858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.446865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.446871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.446886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.456793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.456847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.456860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.456866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.456872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.456888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.466813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.466865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.466878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.466885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.466891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.466906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.476830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.476901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.476914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.476924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.476929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.532 [2024-12-14 16:49:45.476944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.532 qpair failed and we were unable to recover it.
00:36:15.532 [2024-12-14 16:49:45.486836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.532 [2024-12-14 16:49:45.486893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.532 [2024-12-14 16:49:45.486908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.532 [2024-12-14 16:49:45.486915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.532 [2024-12-14 16:49:45.486922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.486938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.496871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.496925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.496939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.496947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.496953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.496970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.506894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.506948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.506961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.506968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.506974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.506989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.516936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.516992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.517005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.517012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.517018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.517036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.526964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.527029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.527042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.527050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.527056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.527071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.536987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.537039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.537052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.537058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.537065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.537080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.547049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.547100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.547114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.547121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.547128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.547143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.557076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.557151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.557164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.557171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.557177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.557192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.567103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.567181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.567195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.567202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.567208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.567223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.577104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.577152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.577165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.577172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.577178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.577192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.587130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.587186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.587199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.587205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.587212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.587226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.533 [2024-12-14 16:49:45.597163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.533 [2024-12-14 16:49:45.597237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.533 [2024-12-14 16:49:45.597250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.533 [2024-12-14 16:49:45.597257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.533 [2024-12-14 16:49:45.597264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.533 [2024-12-14 16:49:45.597279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.533 qpair failed and we were unable to recover it.
00:36:15.792 [2024-12-14 16:49:45.607226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.792 [2024-12-14 16:49:45.607278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.792 [2024-12-14 16:49:45.607294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.792 [2024-12-14 16:49:45.607301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.792 [2024-12-14 16:49:45.607307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.792 [2024-12-14 16:49:45.607322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.792 qpair failed and we were unable to recover it.
00:36:15.792 [2024-12-14 16:49:45.617243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.792 [2024-12-14 16:49:45.617301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.792 [2024-12-14 16:49:45.617314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.792 [2024-12-14 16:49:45.617321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.792 [2024-12-14 16:49:45.617328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.792 [2024-12-14 16:49:45.617342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.792 qpair failed and we were unable to recover it.
00:36:15.792 [2024-12-14 16:49:45.627281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.792 [2024-12-14 16:49:45.627346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.792 [2024-12-14 16:49:45.627359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.792 [2024-12-14 16:49:45.627366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.792 [2024-12-14 16:49:45.627372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.792 [2024-12-14 16:49:45.627387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.792 qpair failed and we were unable to recover it.
00:36:15.792 [2024-12-14 16:49:45.637293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.792 [2024-12-14 16:49:45.637348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.792 [2024-12-14 16:49:45.637361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.792 [2024-12-14 16:49:45.637368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.792 [2024-12-14 16:49:45.637374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.637389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.647354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.793 [2024-12-14 16:49:45.647409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.793 [2024-12-14 16:49:45.647436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.793 [2024-12-14 16:49:45.647444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.793 [2024-12-14 16:49:45.647453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.647474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.657350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.793 [2024-12-14 16:49:45.657407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.793 [2024-12-14 16:49:45.657422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.793 [2024-12-14 16:49:45.657429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.793 [2024-12-14 16:49:45.657435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.657451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.667355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.793 [2024-12-14 16:49:45.667436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.793 [2024-12-14 16:49:45.667449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.793 [2024-12-14 16:49:45.667456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.793 [2024-12-14 16:49:45.667462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.667477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.677402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.793 [2024-12-14 16:49:45.677458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.793 [2024-12-14 16:49:45.677471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.793 [2024-12-14 16:49:45.677478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.793 [2024-12-14 16:49:45.677484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.677500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.687422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.793 [2024-12-14 16:49:45.687477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.793 [2024-12-14 16:49:45.687491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.793 [2024-12-14 16:49:45.687498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.793 [2024-12-14 16:49:45.687505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.687520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.697449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.793 [2024-12-14 16:49:45.697503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.793 [2024-12-14 16:49:45.697517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.793 [2024-12-14 16:49:45.697524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.793 [2024-12-14 16:49:45.697530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.697546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.707474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.793 [2024-12-14 16:49:45.707525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.793 [2024-12-14 16:49:45.707538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.793 [2024-12-14 16:49:45.707545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.793 [2024-12-14 16:49:45.707551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.707571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.717518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:15.793 [2024-12-14 16:49:45.717587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:15.793 [2024-12-14 16:49:45.717601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:15.793 [2024-12-14 16:49:45.717608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:15.793 [2024-12-14 16:49:45.717615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:15.793 [2024-12-14 16:49:45.717630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:15.793 qpair failed and we were unable to recover it.
00:36:15.793 [2024-12-14 16:49:45.727534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.727595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.727608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.727616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.727622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.727638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.737564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.737620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.737637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.737645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.737661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.737679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.747599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.747656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.747670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.747677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.747683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.747698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.757635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.757705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.757719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.757726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.757732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.757747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.767656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.767710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.767723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.767730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.767736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.767751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.777680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.777737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.777750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.777757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.777767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.777781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.787703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.787758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.787772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.787778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.787785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.787800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.797791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.797857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.797870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.797877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.797883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.797898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.807780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.807846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.807859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.807866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.807872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.807886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.817797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.817885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.817898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.817905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.817911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.793 [2024-12-14 16:49:45.817925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.793 qpair failed and we were unable to recover it. 
00:36:15.793 [2024-12-14 16:49:45.827761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.793 [2024-12-14 16:49:45.827814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.793 [2024-12-14 16:49:45.827827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.793 [2024-12-14 16:49:45.827834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.793 [2024-12-14 16:49:45.827840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.794 [2024-12-14 16:49:45.827854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.794 qpair failed and we were unable to recover it. 
00:36:15.794 [2024-12-14 16:49:45.837845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.794 [2024-12-14 16:49:45.837902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.794 [2024-12-14 16:49:45.837915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.794 [2024-12-14 16:49:45.837922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.794 [2024-12-14 16:49:45.837928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.794 [2024-12-14 16:49:45.837943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.794 qpair failed and we were unable to recover it. 
00:36:15.794 [2024-12-14 16:49:45.847851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.794 [2024-12-14 16:49:45.847905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.794 [2024-12-14 16:49:45.847918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.794 [2024-12-14 16:49:45.847924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.794 [2024-12-14 16:49:45.847930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.794 [2024-12-14 16:49:45.847945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.794 qpair failed and we were unable to recover it. 
00:36:15.794 [2024-12-14 16:49:45.857912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.794 [2024-12-14 16:49:45.857965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.794 [2024-12-14 16:49:45.857977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.794 [2024-12-14 16:49:45.857984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.794 [2024-12-14 16:49:45.857990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.794 [2024-12-14 16:49:45.858004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.794 qpair failed and we were unable to recover it. 
00:36:15.794 [2024-12-14 16:49:45.867879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.794 [2024-12-14 16:49:45.867930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.794 [2024-12-14 16:49:45.867946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.794 [2024-12-14 16:49:45.867953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.794 [2024-12-14 16:49:45.867959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:15.794 [2024-12-14 16:49:45.867973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:15.794 qpair failed and we were unable to recover it. 
00:36:16.053 [2024-12-14 16:49:45.877961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.053 [2024-12-14 16:49:45.878024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.053 [2024-12-14 16:49:45.878036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.053 [2024-12-14 16:49:45.878043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.053 [2024-12-14 16:49:45.878049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.053 [2024-12-14 16:49:45.878062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.053 qpair failed and we were unable to recover it. 
00:36:16.053 [2024-12-14 16:49:45.888003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.053 [2024-12-14 16:49:45.888060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.053 [2024-12-14 16:49:45.888073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.053 [2024-12-14 16:49:45.888079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.053 [2024-12-14 16:49:45.888085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.053 [2024-12-14 16:49:45.888099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.053 qpair failed and we were unable to recover it. 
00:36:16.053 [2024-12-14 16:49:45.898037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.053 [2024-12-14 16:49:45.898090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.053 [2024-12-14 16:49:45.898103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.053 [2024-12-14 16:49:45.898109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.053 [2024-12-14 16:49:45.898115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.053 [2024-12-14 16:49:45.898130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.053 qpair failed and we were unable to recover it. 
00:36:16.053 [2024-12-14 16:49:45.908081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.053 [2024-12-14 16:49:45.908139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.053 [2024-12-14 16:49:45.908152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.053 [2024-12-14 16:49:45.908162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.053 [2024-12-14 16:49:45.908168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.053 [2024-12-14 16:49:45.908182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.053 qpair failed and we were unable to recover it. 
00:36:16.053 [2024-12-14 16:49:45.918085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.053 [2024-12-14 16:49:45.918140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.053 [2024-12-14 16:49:45.918154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.053 [2024-12-14 16:49:45.918160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.053 [2024-12-14 16:49:45.918166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.053 [2024-12-14 16:49:45.918180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.053 qpair failed and we were unable to recover it. 
00:36:16.053 [2024-12-14 16:49:45.928115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.053 [2024-12-14 16:49:45.928173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.053 [2024-12-14 16:49:45.928185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.053 [2024-12-14 16:49:45.928192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.053 [2024-12-14 16:49:45.928197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.053 [2024-12-14 16:49:45.928211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.053 qpair failed and we were unable to recover it. 
00:36:16.053 [2024-12-14 16:49:45.938146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.053 [2024-12-14 16:49:45.938198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.053 [2024-12-14 16:49:45.938211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.053 [2024-12-14 16:49:45.938218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.053 [2024-12-14 16:49:45.938223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.054 [2024-12-14 16:49:45.938237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.054 qpair failed and we were unable to recover it. 
00:36:16.054 [2024-12-14 16:49:45.948237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.054 [2024-12-14 16:49:45.948292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.054 [2024-12-14 16:49:45.948305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.054 [2024-12-14 16:49:45.948312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.054 [2024-12-14 16:49:45.948317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.054 [2024-12-14 16:49:45.948335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.054 qpair failed and we were unable to recover it. 
00:36:16.054 [2024-12-14 16:49:45.958206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.054 [2024-12-14 16:49:45.958264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.054 [2024-12-14 16:49:45.958277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.054 [2024-12-14 16:49:45.958283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.054 [2024-12-14 16:49:45.958289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.054 [2024-12-14 16:49:45.958303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.054 qpair failed and we were unable to recover it. 
00:36:16.054 [2024-12-14 16:49:45.968215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.054 [2024-12-14 16:49:45.968289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.054 [2024-12-14 16:49:45.968303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.054 [2024-12-14 16:49:45.968309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.054 [2024-12-14 16:49:45.968315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.054 [2024-12-14 16:49:45.968330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.054 qpair failed and we were unable to recover it. 
00:36:16.054 [2024-12-14 16:49:45.978244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.054 [2024-12-14 16:49:45.978323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.054 [2024-12-14 16:49:45.978336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.054 [2024-12-14 16:49:45.978343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.054 [2024-12-14 16:49:45.978348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.054 [2024-12-14 16:49:45.978363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.054 qpair failed and we were unable to recover it. 
00:36:16.054 [2024-12-14 16:49:45.988271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.054 [2024-12-14 16:49:45.988318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.054 [2024-12-14 16:49:45.988332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.054 [2024-12-14 16:49:45.988339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.054 [2024-12-14 16:49:45.988345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.054 [2024-12-14 16:49:45.988360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.054 qpair failed and we were unable to recover it. 
00:36:16.054 [2024-12-14 16:49:45.998245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.054 [2024-12-14 16:49:45.998303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.054 [2024-12-14 16:49:45.998316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.054 [2024-12-14 16:49:45.998323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.054 [2024-12-14 16:49:45.998329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.054 [2024-12-14 16:49:45.998343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.054 qpair failed and we were unable to recover it.
00:36:16.054 [2024-12-14 16:49:46.008346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.054 [2024-12-14 16:49:46.008401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.054 [2024-12-14 16:49:46.008414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.054 [2024-12-14 16:49:46.008420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.054 [2024-12-14 16:49:46.008426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.054 [2024-12-14 16:49:46.008440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.054 qpair failed and we were unable to recover it.
00:36:16.054 [2024-12-14 16:49:46.018373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.054 [2024-12-14 16:49:46.018421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.054 [2024-12-14 16:49:46.018434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.054 [2024-12-14 16:49:46.018440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.054 [2024-12-14 16:49:46.018445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.054 [2024-12-14 16:49:46.018460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.054 qpair failed and we were unable to recover it.
00:36:16.054 [2024-12-14 16:49:46.028407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.054 [2024-12-14 16:49:46.028469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.054 [2024-12-14 16:49:46.028481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.054 [2024-12-14 16:49:46.028488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.054 [2024-12-14 16:49:46.028493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.054 [2024-12-14 16:49:46.028507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.054 qpair failed and we were unable to recover it.
00:36:16.054 [2024-12-14 16:49:46.038457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.054 [2024-12-14 16:49:46.038525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.054 [2024-12-14 16:49:46.038538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.054 [2024-12-14 16:49:46.038547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.054 [2024-12-14 16:49:46.038553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.054 [2024-12-14 16:49:46.038571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.054 qpair failed and we were unable to recover it.
00:36:16.054 [2024-12-14 16:49:46.048475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.054 [2024-12-14 16:49:46.048531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.054 [2024-12-14 16:49:46.048544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.054 [2024-12-14 16:49:46.048551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.054 [2024-12-14 16:49:46.048560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.054 [2024-12-14 16:49:46.048575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.054 qpair failed and we were unable to recover it.
00:36:16.054 [2024-12-14 16:49:46.058525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.054 [2024-12-14 16:49:46.058584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.054 [2024-12-14 16:49:46.058597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.054 [2024-12-14 16:49:46.058604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.054 [2024-12-14 16:49:46.058610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.054 [2024-12-14 16:49:46.058625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.054 qpair failed and we were unable to recover it.
00:36:16.054 [2024-12-14 16:49:46.068523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.054 [2024-12-14 16:49:46.068578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.054 [2024-12-14 16:49:46.068591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.054 [2024-12-14 16:49:46.068598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.054 [2024-12-14 16:49:46.068604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.054 [2024-12-14 16:49:46.068618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.054 qpair failed and we were unable to recover it.
00:36:16.054 [2024-12-14 16:49:46.078531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.055 [2024-12-14 16:49:46.078590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.055 [2024-12-14 16:49:46.078603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.055 [2024-12-14 16:49:46.078609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.055 [2024-12-14 16:49:46.078615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.055 [2024-12-14 16:49:46.078632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.055 qpair failed and we were unable to recover it.
00:36:16.055 [2024-12-14 16:49:46.088588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.055 [2024-12-14 16:49:46.088638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.055 [2024-12-14 16:49:46.088650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.055 [2024-12-14 16:49:46.088656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.055 [2024-12-14 16:49:46.088662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.055 [2024-12-14 16:49:46.088676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.055 qpair failed and we were unable to recover it.
00:36:16.055 [2024-12-14 16:49:46.098634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.055 [2024-12-14 16:49:46.098704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.055 [2024-12-14 16:49:46.098716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.055 [2024-12-14 16:49:46.098723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.055 [2024-12-14 16:49:46.098728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.055 [2024-12-14 16:49:46.098743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.055 qpair failed and we were unable to recover it.
00:36:16.055 [2024-12-14 16:49:46.108671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.055 [2024-12-14 16:49:46.108734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.055 [2024-12-14 16:49:46.108747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.055 [2024-12-14 16:49:46.108753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.055 [2024-12-14 16:49:46.108759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.055 [2024-12-14 16:49:46.108773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.055 qpair failed and we were unable to recover it.
00:36:16.055 [2024-12-14 16:49:46.118660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.055 [2024-12-14 16:49:46.118714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.055 [2024-12-14 16:49:46.118727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.055 [2024-12-14 16:49:46.118733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.055 [2024-12-14 16:49:46.118739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.055 [2024-12-14 16:49:46.118753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.055 qpair failed and we were unable to recover it.
00:36:16.055 [2024-12-14 16:49:46.128673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.055 [2024-12-14 16:49:46.128727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.055 [2024-12-14 16:49:46.128740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.055 [2024-12-14 16:49:46.128746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.055 [2024-12-14 16:49:46.128751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.055 [2024-12-14 16:49:46.128765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.055 qpair failed and we were unable to recover it.
00:36:16.314 [2024-12-14 16:49:46.138720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.314 [2024-12-14 16:49:46.138772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.314 [2024-12-14 16:49:46.138784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.314 [2024-12-14 16:49:46.138790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.314 [2024-12-14 16:49:46.138796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.314 [2024-12-14 16:49:46.138810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.314 qpair failed and we were unable to recover it.
00:36:16.314 [2024-12-14 16:49:46.148765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.314 [2024-12-14 16:49:46.148848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.314 [2024-12-14 16:49:46.148861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.314 [2024-12-14 16:49:46.148867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.314 [2024-12-14 16:49:46.148873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.314 [2024-12-14 16:49:46.148886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.314 qpair failed and we were unable to recover it.
00:36:16.314 [2024-12-14 16:49:46.158733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.314 [2024-12-14 16:49:46.158787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.314 [2024-12-14 16:49:46.158799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.314 [2024-12-14 16:49:46.158806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.314 [2024-12-14 16:49:46.158811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.314 [2024-12-14 16:49:46.158826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.314 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.168793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.168845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.168861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.168867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.168873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.168887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.178947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.179053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.179065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.179071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.179077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.179090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.188933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.188990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.189002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.189008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.189014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.189029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.198918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.198980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.198992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.198999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.199005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.199019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.208947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.209011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.209024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.209030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.209041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.209055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.218933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.218985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.218997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.219003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.219009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.219023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.228953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.229010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.229022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.229028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.229034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.229048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.238992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.239054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.239068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.239075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.239080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.239095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.249016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.249073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.249086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.249092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.249098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.249112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.259070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.259122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.259135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.259141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.259147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.259161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.269062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.269137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.269150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.269156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.269161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.269175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.279097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.279150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.279162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.279168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.279174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.279188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.289117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.289167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.315 [2024-12-14 16:49:46.289179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.315 [2024-12-14 16:49:46.289185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.315 [2024-12-14 16:49:46.289191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.315 [2024-12-14 16:49:46.289205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.315 qpair failed and we were unable to recover it.
00:36:16.315 [2024-12-14 16:49:46.299149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.315 [2024-12-14 16:49:46.299198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.316 [2024-12-14 16:49:46.299214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.316 [2024-12-14 16:49:46.299220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.316 [2024-12-14 16:49:46.299226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.316 [2024-12-14 16:49:46.299240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.316 qpair failed and we were unable to recover it.
00:36:16.316 [2024-12-14 16:49:46.309200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.316 [2024-12-14 16:49:46.309256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.316 [2024-12-14 16:49:46.309269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.316 [2024-12-14 16:49:46.309275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.316 [2024-12-14 16:49:46.309281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.316 [2024-12-14 16:49:46.309295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.316 qpair failed and we were unable to recover it.
00:36:16.316 [2024-12-14 16:49:46.319234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.316 [2024-12-14 16:49:46.319288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.316 [2024-12-14 16:49:46.319301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.316 [2024-12-14 16:49:46.319307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.316 [2024-12-14 16:49:46.319313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.316 [2024-12-14 16:49:46.319327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.316 qpair failed and we were unable to recover it.
00:36:16.316 [2024-12-14 16:49:46.329250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.316 [2024-12-14 16:49:46.329304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.316 [2024-12-14 16:49:46.329317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.316 [2024-12-14 16:49:46.329323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.316 [2024-12-14 16:49:46.329329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.316 [2024-12-14 16:49:46.329344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.316 qpair failed and we were unable to recover it.
00:36:16.316 [2024-12-14 16:49:46.339273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.316 [2024-12-14 16:49:46.339325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.316 [2024-12-14 16:49:46.339338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.316 [2024-12-14 16:49:46.339345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.316 [2024-12-14 16:49:46.339354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.316 [2024-12-14 16:49:46.339368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.316 qpair failed and we were unable to recover it.
00:36:16.316 [2024-12-14 16:49:46.349327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.316 [2024-12-14 16:49:46.349412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.316 [2024-12-14 16:49:46.349425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.316 [2024-12-14 16:49:46.349431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.316 [2024-12-14 16:49:46.349436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.316 [2024-12-14 16:49:46.349451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.316 qpair failed and we were unable to recover it. 
00:36:16.316 [2024-12-14 16:49:46.359355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.316 [2024-12-14 16:49:46.359435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.316 [2024-12-14 16:49:46.359448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.316 [2024-12-14 16:49:46.359454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.316 [2024-12-14 16:49:46.359460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.316 [2024-12-14 16:49:46.359475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.316 qpair failed and we were unable to recover it. 
00:36:16.316 [2024-12-14 16:49:46.369372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.316 [2024-12-14 16:49:46.369425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.316 [2024-12-14 16:49:46.369437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.316 [2024-12-14 16:49:46.369444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.316 [2024-12-14 16:49:46.369450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.316 [2024-12-14 16:49:46.369464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.316 qpair failed and we were unable to recover it. 
00:36:16.316 [2024-12-14 16:49:46.379352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.316 [2024-12-14 16:49:46.379450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.316 [2024-12-14 16:49:46.379462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.316 [2024-12-14 16:49:46.379468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.316 [2024-12-14 16:49:46.379473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.316 [2024-12-14 16:49:46.379488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.316 qpair failed and we were unable to recover it. 
00:36:16.316 [2024-12-14 16:49:46.389488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.316 [2024-12-14 16:49:46.389589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.316 [2024-12-14 16:49:46.389602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.316 [2024-12-14 16:49:46.389608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.316 [2024-12-14 16:49:46.389613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.316 [2024-12-14 16:49:46.389628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.316 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.399451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.399508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.399520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.399526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.399532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.575 [2024-12-14 16:49:46.399547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.575 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.409431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.409485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.409497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.409504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.409509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.575 [2024-12-14 16:49:46.409523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.575 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.419511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.419573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.419586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.419592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.419598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.575 [2024-12-14 16:49:46.419614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.575 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.429530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.429591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.429604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.429610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.429616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.575 [2024-12-14 16:49:46.429630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.575 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.439498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.439562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.439575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.439581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.439587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.575 [2024-12-14 16:49:46.439602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.575 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.449518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.449576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.449589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.449596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.449602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.575 [2024-12-14 16:49:46.449616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.575 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.459573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.459621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.459633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.459640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.459646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.575 [2024-12-14 16:49:46.459661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.575 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.469625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.469677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.469689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.469698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.469703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.575 [2024-12-14 16:49:46.469718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.575 qpair failed and we were unable to recover it. 
00:36:16.575 [2024-12-14 16:49:46.479694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.575 [2024-12-14 16:49:46.479748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.575 [2024-12-14 16:49:46.479761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.575 [2024-12-14 16:49:46.479767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.575 [2024-12-14 16:49:46.479773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.479788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.489709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.489767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.489781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.489787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.489793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.489808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.499746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.499796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.499808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.499814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.499820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.499835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.509767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.509822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.509834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.509840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.509846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.509863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.519809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.519868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.519880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.519886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.519892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.519906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.529813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.529865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.529878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.529884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.529889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.529903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.539874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.539934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.539947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.539953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.539959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.539974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.549821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.549874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.549887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.549893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.549899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.549913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.559867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.559926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.559939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.559945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.559951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.559965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.569944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.569997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.570011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.570017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.570023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.570037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.579896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.579959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.579971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.579977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.579983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.579998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.589975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.590031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.590044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.590050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.590056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.590071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.599984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.600039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.600051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.600061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.600067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.600081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.610083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.576 [2024-12-14 16:49:46.610140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.576 [2024-12-14 16:49:46.610152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.576 [2024-12-14 16:49:46.610158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.576 [2024-12-14 16:49:46.610164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:16.576 [2024-12-14 16:49:46.610179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:16.576 qpair failed and we were unable to recover it. 
00:36:16.576 [2024-12-14 16:49:46.620062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.577 [2024-12-14 16:49:46.620116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.577 [2024-12-14 16:49:46.620129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.577 [2024-12-14 16:49:46.620135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.577 [2024-12-14 16:49:46.620141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.577 [2024-12-14 16:49:46.620154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.577 qpair failed and we were unable to recover it.
00:36:16.577 [2024-12-14 16:49:46.630117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.577 [2024-12-14 16:49:46.630164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.577 [2024-12-14 16:49:46.630177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.577 [2024-12-14 16:49:46.630183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.577 [2024-12-14 16:49:46.630189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.577 [2024-12-14 16:49:46.630202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.577 qpair failed and we were unable to recover it.
00:36:16.577 [2024-12-14 16:49:46.640165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.577 [2024-12-14 16:49:46.640258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.577 [2024-12-14 16:49:46.640270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.577 [2024-12-14 16:49:46.640276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.577 [2024-12-14 16:49:46.640282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.577 [2024-12-14 16:49:46.640300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.577 qpair failed and we were unable to recover it.
00:36:16.577 [2024-12-14 16:49:46.650109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.577 [2024-12-14 16:49:46.650165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.577 [2024-12-14 16:49:46.650178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.577 [2024-12-14 16:49:46.650184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.577 [2024-12-14 16:49:46.650189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.577 [2024-12-14 16:49:46.650203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.577 qpair failed and we were unable to recover it.
00:36:16.836 [2024-12-14 16:49:46.660250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.836 [2024-12-14 16:49:46.660312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.836 [2024-12-14 16:49:46.660324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.836 [2024-12-14 16:49:46.660331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.836 [2024-12-14 16:49:46.660336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.836 [2024-12-14 16:49:46.660352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.836 qpair failed and we were unable to recover it.
00:36:16.836 [2024-12-14 16:49:46.670301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.836 [2024-12-14 16:49:46.670392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.836 [2024-12-14 16:49:46.670405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.836 [2024-12-14 16:49:46.670411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.836 [2024-12-14 16:49:46.670417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.836 [2024-12-14 16:49:46.670431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.836 qpair failed and we were unable to recover it.
00:36:16.836 [2024-12-14 16:49:46.680260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.836 [2024-12-14 16:49:46.680315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.836 [2024-12-14 16:49:46.680328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.836 [2024-12-14 16:49:46.680334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.836 [2024-12-14 16:49:46.680340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.836 [2024-12-14 16:49:46.680355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.836 qpair failed and we were unable to recover it.
00:36:16.836 [2024-12-14 16:49:46.690323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.836 [2024-12-14 16:49:46.690381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.690394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.690401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.690406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.690421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.700364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.700414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.700427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.700433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.700438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.700453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.710366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.710419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.710432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.710438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.710444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.710458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.720404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.720462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.720476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.720482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.720488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.720502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.730334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.730398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.730416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.730422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.730430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.730445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.740356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.740434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.740449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.740455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.740461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.740477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.750458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.750509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.750523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.750529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.750535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.750549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.760540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.760597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.760609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.760616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.760622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.760637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.770535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.770618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.770630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.770637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.770646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.770660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.780542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.780595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.780607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.780614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.780619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.780634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.790571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.790624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.790636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.790642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.790648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.790663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.800621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.800687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.800700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.800706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.800712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.800727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.810643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.810696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.810709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.810715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.837 [2024-12-14 16:49:46.810721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.837 [2024-12-14 16:49:46.810735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.837 qpair failed and we were unable to recover it.
00:36:16.837 [2024-12-14 16:49:46.820678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.837 [2024-12-14 16:49:46.820738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.837 [2024-12-14 16:49:46.820751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.837 [2024-12-14 16:49:46.820757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.820763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.820777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.830678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.830730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.830742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.830749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.830754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.830769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.840731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.840787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.840799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.840806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.840811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.840826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.850749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.850808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.850821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.850828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.850833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.850848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.860769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.860818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.860834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.860840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.860846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.860860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.870799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.870854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.870867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.870873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.870879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.870894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.880864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.880920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.880933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.880940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.880946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.880961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.890871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.890963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.890976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.890982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.890987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.891001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.900850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.900909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.900922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.900928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.900937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.900951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:16.838 [2024-12-14 16:49:46.910917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:16.838 [2024-12-14 16:49:46.910968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:16.838 [2024-12-14 16:49:46.910981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:16.838 [2024-12-14 16:49:46.910987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:16.838 [2024-12-14 16:49:46.910993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:16.838 [2024-12-14 16:49:46.911007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:16.838 qpair failed and we were unable to recover it.
00:36:17.098 [2024-12-14 16:49:46.920943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.098 [2024-12-14 16:49:46.921018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.098 [2024-12-14 16:49:46.921030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.098 [2024-12-14 16:49:46.921037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.098 [2024-12-14 16:49:46.921042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.098 [2024-12-14 16:49:46.921057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.098 qpair failed and we were unable to recover it.
00:36:17.098 [2024-12-14 16:49:46.930983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.098 [2024-12-14 16:49:46.931052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.098 [2024-12-14 16:49:46.931064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.098 [2024-12-14 16:49:46.931070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.098 [2024-12-14 16:49:46.931076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.098 [2024-12-14 16:49:46.931090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.098 qpair failed and we were unable to recover it.
00:36:17.098 [2024-12-14 16:49:46.941058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.098 [2024-12-14 16:49:46.941106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.098 [2024-12-14 16:49:46.941118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.098 [2024-12-14 16:49:46.941124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.098 [2024-12-14 16:49:46.941129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.098 [2024-12-14 16:49:46.941143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.098 qpair failed and we were unable to recover it.
00:36:17.098 [2024-12-14 16:49:46.951028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.098 [2024-12-14 16:49:46.951088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.098 [2024-12-14 16:49:46.951101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.098 [2024-12-14 16:49:46.951108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.098 [2024-12-14 16:49:46.951114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.098 [2024-12-14 16:49:46.951130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.098 qpair failed and we were unable to recover it.
00:36:17.098 [2024-12-14 16:49:46.961055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.098 [2024-12-14 16:49:46.961111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.098 [2024-12-14 16:49:46.961123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.098 [2024-12-14 16:49:46.961130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.098 [2024-12-14 16:49:46.961135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.098 [2024-12-14 16:49:46.961150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.098 qpair failed and we were unable to recover it.
00:36:17.098 [2024-12-14 16:49:46.971110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.098 [2024-12-14 16:49:46.971177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.098 [2024-12-14 16:49:46.971191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.098 [2024-12-14 16:49:46.971198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.098 [2024-12-14 16:49:46.971204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.098 [2024-12-14 16:49:46.971219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.098 qpair failed and we were unable to recover it. 
00:36:17.098 [2024-12-14 16:49:46.981112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.098 [2024-12-14 16:49:46.981168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.098 [2024-12-14 16:49:46.981181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.098 [2024-12-14 16:49:46.981187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.098 [2024-12-14 16:49:46.981193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.098 [2024-12-14 16:49:46.981208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.098 qpair failed and we were unable to recover it. 
00:36:17.098 [2024-12-14 16:49:46.991126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.098 [2024-12-14 16:49:46.991189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.098 [2024-12-14 16:49:46.991204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.098 [2024-12-14 16:49:46.991211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.098 [2024-12-14 16:49:46.991216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.098 [2024-12-14 16:49:46.991232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.098 qpair failed and we were unable to recover it. 
00:36:17.098 [2024-12-14 16:49:47.001155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.098 [2024-12-14 16:49:47.001212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.098 [2024-12-14 16:49:47.001225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.098 [2024-12-14 16:49:47.001232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.098 [2024-12-14 16:49:47.001237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.098 [2024-12-14 16:49:47.001252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.098 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.011193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.011245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.011257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.011263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.011269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.011283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.021154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.021208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.021221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.021227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.021233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.021247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.031262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.031326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.031338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.031348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.031354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.031368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.041280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.041337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.041350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.041356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.041362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.041376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.051297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.051347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.051361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.051367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.051372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.051387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.061347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.061400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.061413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.061419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.061425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.061439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.071340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.071393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.071406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.071412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.071418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.071435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.081357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.081414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.081427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.081433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.081439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.081454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.091398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.091452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.091465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.091471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.091477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.091491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.101430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.101489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.101502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.101509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.101515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.101529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.111453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.111501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.111514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.111520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.111525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.111540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.121504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.121566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.121579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.121585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.121591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.121606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.131515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.099 [2024-12-14 16:49:47.131578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.099 [2024-12-14 16:49:47.131591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.099 [2024-12-14 16:49:47.131597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.099 [2024-12-14 16:49:47.131603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.099 [2024-12-14 16:49:47.131618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.099 qpair failed and we were unable to recover it. 
00:36:17.099 [2024-12-14 16:49:47.141549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 16:49:47.141609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 16:49:47.141621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 16:49:47.141627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 16:49:47.141633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.100 [2024-12-14 16:49:47.141648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 16:49:47.151576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 16:49:47.151628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 16:49:47.151641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 16:49:47.151647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 16:49:47.151653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.100 [2024-12-14 16:49:47.151667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 16:49:47.161601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 16:49:47.161656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 16:49:47.161671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 16:49:47.161678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 16:49:47.161684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.100 [2024-12-14 16:49:47.161698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 16:49:47.171628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 16:49:47.171681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 16:49:47.171694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 16:49:47.171700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 16:49:47.171705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.100 [2024-12-14 16:49:47.171720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.100 [2024-12-14 16:49:47.181674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.100 [2024-12-14 16:49:47.181751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.100 [2024-12-14 16:49:47.181763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.100 [2024-12-14 16:49:47.181769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.100 [2024-12-14 16:49:47.181775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.100 [2024-12-14 16:49:47.181790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.100 qpair failed and we were unable to recover it. 
00:36:17.359 [2024-12-14 16:49:47.191679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.359 [2024-12-14 16:49:47.191731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.359 [2024-12-14 16:49:47.191743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.360 [2024-12-14 16:49:47.191749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.360 [2024-12-14 16:49:47.191755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.360 [2024-12-14 16:49:47.191769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.360 qpair failed and we were unable to recover it. 
00:36:17.360 [2024-12-14 16:49:47.201733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.360 [2024-12-14 16:49:47.201797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.360 [2024-12-14 16:49:47.201808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.360 [2024-12-14 16:49:47.201814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.360 [2024-12-14 16:49:47.201820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.360 [2024-12-14 16:49:47.201838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.360 qpair failed and we were unable to recover it. 
00:36:17.360 [2024-12-14 16:49:47.211759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.360 [2024-12-14 16:49:47.211815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.360 [2024-12-14 16:49:47.211827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.360 [2024-12-14 16:49:47.211833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.360 [2024-12-14 16:49:47.211839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.360 [2024-12-14 16:49:47.211853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.360 qpair failed and we were unable to recover it. 
00:36:17.360 [2024-12-14 16:49:47.221799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.360 [2024-12-14 16:49:47.221852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.360 [2024-12-14 16:49:47.221865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.360 [2024-12-14 16:49:47.221871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.360 [2024-12-14 16:49:47.221877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.360 [2024-12-14 16:49:47.221891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.360 qpair failed and we were unable to recover it. 
00:36:17.360 [2024-12-14 16:49:47.231801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.360 [2024-12-14 16:49:47.231853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.360 [2024-12-14 16:49:47.231865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.360 [2024-12-14 16:49:47.231872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.360 [2024-12-14 16:49:47.231878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.360 [2024-12-14 16:49:47.231892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.360 qpair failed and we were unable to recover it. 
00:36:17.360 [2024-12-14 16:49:47.241847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.241926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.241938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.241944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.360 [2024-12-14 16:49:47.241950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.360 [2024-12-14 16:49:47.241964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.360 qpair failed and we were unable to recover it.
00:36:17.360 [2024-12-14 16:49:47.251867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.251923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.251935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.251941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.360 [2024-12-14 16:49:47.251947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.360 [2024-12-14 16:49:47.251961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.360 qpair failed and we were unable to recover it.
00:36:17.360 [2024-12-14 16:49:47.261898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.261950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.261962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.261968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.360 [2024-12-14 16:49:47.261974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.360 [2024-12-14 16:49:47.261988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.360 qpair failed and we were unable to recover it.
00:36:17.360 [2024-12-14 16:49:47.271898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.271956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.271968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.271975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.360 [2024-12-14 16:49:47.271980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.360 [2024-12-14 16:49:47.271995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.360 qpair failed and we were unable to recover it.
00:36:17.360 [2024-12-14 16:49:47.281955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.282013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.282026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.282032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.360 [2024-12-14 16:49:47.282038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.360 [2024-12-14 16:49:47.282052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.360 qpair failed and we were unable to recover it.
00:36:17.360 [2024-12-14 16:49:47.291986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.292041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.292056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.292063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.360 [2024-12-14 16:49:47.292068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.360 [2024-12-14 16:49:47.292082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.360 qpair failed and we were unable to recover it.
00:36:17.360 [2024-12-14 16:49:47.302021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.302075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.302087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.302093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.360 [2024-12-14 16:49:47.302098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.360 [2024-12-14 16:49:47.302112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.360 qpair failed and we were unable to recover it.
00:36:17.360 [2024-12-14 16:49:47.312034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.312090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.312103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.312109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.360 [2024-12-14 16:49:47.312115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.360 [2024-12-14 16:49:47.312129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.360 qpair failed and we were unable to recover it.
00:36:17.360 [2024-12-14 16:49:47.322094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.360 [2024-12-14 16:49:47.322149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.360 [2024-12-14 16:49:47.322161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.360 [2024-12-14 16:49:47.322168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.322173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.322187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.332166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.332231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.332245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.332252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.332261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.332276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.342123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.342176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.342189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.342195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.342200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.342215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.352150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.352247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.352260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.352266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.352271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.352286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.362189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.362244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.362256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.362263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.362268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.362283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.372221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.372276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.372289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.372296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.372302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.372316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.382208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.382299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.382312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.382318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.382324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.382337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.392254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.392335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.392348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.392354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.392360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.392375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.402326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.402385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.402397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.402403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.402409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.402423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.412366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.412422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.412435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.412441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.412447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.412461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.422352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.422403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.422419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.422425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.422431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.422445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.432410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.432464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.432476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.432483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.432488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.432503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.361 [2024-12-14 16:49:47.442455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.361 [2024-12-14 16:49:47.442508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.361 [2024-12-14 16:49:47.442521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.361 [2024-12-14 16:49:47.442527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.361 [2024-12-14 16:49:47.442533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.361 [2024-12-14 16:49:47.442547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.361 qpair failed and we were unable to recover it.
00:36:17.621 [2024-12-14 16:49:47.452430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.621 [2024-12-14 16:49:47.452488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.621 [2024-12-14 16:49:47.452501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.621 [2024-12-14 16:49:47.452507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.621 [2024-12-14 16:49:47.452513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.621 [2024-12-14 16:49:47.452527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.621 qpair failed and we were unable to recover it.
00:36:17.621 [2024-12-14 16:49:47.462512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.621 [2024-12-14 16:49:47.462570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.621 [2024-12-14 16:49:47.462583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.621 [2024-12-14 16:49:47.462593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.621 [2024-12-14 16:49:47.462598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.621 [2024-12-14 16:49:47.462614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.621 qpair failed and we were unable to recover it.
00:36:17.621 [2024-12-14 16:49:47.472487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.621 [2024-12-14 16:49:47.472538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.621 [2024-12-14 16:49:47.472550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.621 [2024-12-14 16:49:47.472559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.621 [2024-12-14 16:49:47.472565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.621 [2024-12-14 16:49:47.472580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.621 qpair failed and we were unable to recover it.
00:36:17.621 [2024-12-14 16:49:47.482540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.621 [2024-12-14 16:49:47.482600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.621 [2024-12-14 16:49:47.482613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.621 [2024-12-14 16:49:47.482620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.621 [2024-12-14 16:49:47.482626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.621 [2024-12-14 16:49:47.482641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.621 qpair failed and we were unable to recover it.
00:36:17.621 [2024-12-14 16:49:47.492595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.621 [2024-12-14 16:49:47.492651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.621 [2024-12-14 16:49:47.492663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.621 [2024-12-14 16:49:47.492669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.621 [2024-12-14 16:49:47.492675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.621 [2024-12-14 16:49:47.492690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.621 qpair failed and we were unable to recover it.
00:36:17.621 [2024-12-14 16:49:47.502599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.621 [2024-12-14 16:49:47.502654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.621 [2024-12-14 16:49:47.502667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.621 [2024-12-14 16:49:47.502673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.621 [2024-12-14 16:49:47.502679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.621 [2024-12-14 16:49:47.502693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.621 qpair failed and we were unable to recover it.
00:36:17.621 [2024-12-14 16:49:47.512619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.622 [2024-12-14 16:49:47.512671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.622 [2024-12-14 16:49:47.512684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.622 [2024-12-14 16:49:47.512690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.622 [2024-12-14 16:49:47.512696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.622 [2024-12-14 16:49:47.512710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.622 qpair failed and we were unable to recover it.
00:36:17.622 [2024-12-14 16:49:47.522652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.622 [2024-12-14 16:49:47.522708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.622 [2024-12-14 16:49:47.522720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.622 [2024-12-14 16:49:47.522726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.622 [2024-12-14 16:49:47.522732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.622 [2024-12-14 16:49:47.522746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.622 qpair failed and we were unable to recover it.
00:36:17.622 [2024-12-14 16:49:47.532685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.622 [2024-12-14 16:49:47.532749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.622 [2024-12-14 16:49:47.532761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.622 [2024-12-14 16:49:47.532768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.622 [2024-12-14 16:49:47.532773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.622 [2024-12-14 16:49:47.532787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.622 qpair failed and we were unable to recover it.
00:36:17.622 [2024-12-14 16:49:47.542705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.622 [2024-12-14 16:49:47.542760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.622 [2024-12-14 16:49:47.542772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.622 [2024-12-14 16:49:47.542779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.622 [2024-12-14 16:49:47.542785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.622 [2024-12-14 16:49:47.542799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.622 qpair failed and we were unable to recover it.
00:36:17.622 [2024-12-14 16:49:47.552729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.622 [2024-12-14 16:49:47.552783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.622 [2024-12-14 16:49:47.552795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.622 [2024-12-14 16:49:47.552801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.622 [2024-12-14 16:49:47.552807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.622 [2024-12-14 16:49:47.552820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.622 qpair failed and we were unable to recover it.
00:36:17.622 [2024-12-14 16:49:47.562769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.622 [2024-12-14 16:49:47.562826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.622 [2024-12-14 16:49:47.562838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.622 [2024-12-14 16:49:47.562844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.622 [2024-12-14 16:49:47.562850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.622 [2024-12-14 16:49:47.562864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.622 qpair failed and we were unable to recover it.
00:36:17.622 [2024-12-14 16:49:47.572793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.622 [2024-12-14 16:49:47.572847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.622 [2024-12-14 16:49:47.572859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.622 [2024-12-14 16:49:47.572866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.622 [2024-12-14 16:49:47.572871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.622 [2024-12-14 16:49:47.572886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.622 qpair failed and we were unable to recover it.
00:36:17.622 [2024-12-14 16:49:47.582830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.622 [2024-12-14 16:49:47.582888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.622 [2024-12-14 16:49:47.582900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.622 [2024-12-14 16:49:47.582907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.622 [2024-12-14 16:49:47.582913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.622 [2024-12-14 16:49:47.582927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.622 qpair failed and we were unable to recover it.
00:36:17.622 [2024-12-14 16:49:47.592843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.622 [2024-12-14 16:49:47.592900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.622 [2024-12-14 16:49:47.592912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.622 [2024-12-14 16:49:47.592922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.622 [2024-12-14 16:49:47.592928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.622 [2024-12-14 16:49:47.592943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.622 qpair failed and we were unable to recover it. 
00:36:17.622 [2024-12-14 16:49:47.602885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.622 [2024-12-14 16:49:47.602956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.622 [2024-12-14 16:49:47.602968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.622 [2024-12-14 16:49:47.602974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.622 [2024-12-14 16:49:47.602980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.622 [2024-12-14 16:49:47.602994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.622 qpair failed and we were unable to recover it. 
00:36:17.622 [2024-12-14 16:49:47.612925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.622 [2024-12-14 16:49:47.612982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.622 [2024-12-14 16:49:47.612994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.622 [2024-12-14 16:49:47.613000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.622 [2024-12-14 16:49:47.613006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.622 [2024-12-14 16:49:47.613020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.622 qpair failed and we were unable to recover it. 
00:36:17.622 [2024-12-14 16:49:47.622940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.622 [2024-12-14 16:49:47.622992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.622 [2024-12-14 16:49:47.623005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.622 [2024-12-14 16:49:47.623012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.622 [2024-12-14 16:49:47.623017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.622 [2024-12-14 16:49:47.623031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.622 qpair failed and we were unable to recover it. 
00:36:17.622 [2024-12-14 16:49:47.632970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.622 [2024-12-14 16:49:47.633021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.622 [2024-12-14 16:49:47.633033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.622 [2024-12-14 16:49:47.633040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.622 [2024-12-14 16:49:47.633045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.622 [2024-12-14 16:49:47.633062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.622 qpair failed and we were unable to recover it. 
00:36:17.622 [2024-12-14 16:49:47.643011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.622 [2024-12-14 16:49:47.643065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.622 [2024-12-14 16:49:47.643077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.623 [2024-12-14 16:49:47.643084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.623 [2024-12-14 16:49:47.643089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.623 [2024-12-14 16:49:47.643103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.623 qpair failed and we were unable to recover it. 
00:36:17.623 [2024-12-14 16:49:47.653042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.623 [2024-12-14 16:49:47.653097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.623 [2024-12-14 16:49:47.653110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.623 [2024-12-14 16:49:47.653116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.623 [2024-12-14 16:49:47.653122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.623 [2024-12-14 16:49:47.653135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.623 qpair failed and we were unable to recover it. 
00:36:17.623 [2024-12-14 16:49:47.663085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.623 [2024-12-14 16:49:47.663186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.623 [2024-12-14 16:49:47.663198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.623 [2024-12-14 16:49:47.663204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.623 [2024-12-14 16:49:47.663210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.623 [2024-12-14 16:49:47.663224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.623 qpair failed and we were unable to recover it. 
00:36:17.623 [2024-12-14 16:49:47.673098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.623 [2024-12-14 16:49:47.673152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.623 [2024-12-14 16:49:47.673164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.623 [2024-12-14 16:49:47.673170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.623 [2024-12-14 16:49:47.673176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.623 [2024-12-14 16:49:47.673190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.623 qpair failed and we were unable to recover it. 
00:36:17.623 [2024-12-14 16:49:47.683126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.623 [2024-12-14 16:49:47.683188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.623 [2024-12-14 16:49:47.683200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.623 [2024-12-14 16:49:47.683206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.623 [2024-12-14 16:49:47.683212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.623 [2024-12-14 16:49:47.683226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.623 qpair failed and we were unable to recover it. 
00:36:17.623 [2024-12-14 16:49:47.693165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.623 [2024-12-14 16:49:47.693218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.623 [2024-12-14 16:49:47.693230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.623 [2024-12-14 16:49:47.693236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.623 [2024-12-14 16:49:47.693242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.623 [2024-12-14 16:49:47.693256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.623 qpair failed and we were unable to recover it. 
00:36:17.623 [2024-12-14 16:49:47.703222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.623 [2024-12-14 16:49:47.703278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.623 [2024-12-14 16:49:47.703290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.623 [2024-12-14 16:49:47.703296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.623 [2024-12-14 16:49:47.703302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.623 [2024-12-14 16:49:47.703316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.623 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.713207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.713256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.713268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.713274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.713281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.713295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.723263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.723349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.723367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.723373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.723379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.723394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.733298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.733355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.733368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.733374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.733380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.733394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.743246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.743299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.743311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.743318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.743324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.743338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.753328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.753381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.753394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.753400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.753406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.753421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.763406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.763463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.763475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.763482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.763488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.763507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.773376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.773449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.773462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.773468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.773474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.773488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.783414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.783471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.783483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.783490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.783495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.783510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.793440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.793491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.793504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.793511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.793516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.793531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.803483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.803537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.803549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.803559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.803565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.803580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.813505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.813562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.813576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.813583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.813588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.813604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.823536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.823595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.883 [2024-12-14 16:49:47.823608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.883 [2024-12-14 16:49:47.823614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.883 [2024-12-14 16:49:47.823620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.883 [2024-12-14 16:49:47.823635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.883 qpair failed and we were unable to recover it. 
00:36:17.883 [2024-12-14 16:49:47.833551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.883 [2024-12-14 16:49:47.833603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.884 [2024-12-14 16:49:47.833615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.884 [2024-12-14 16:49:47.833621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.884 [2024-12-14 16:49:47.833628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.884 [2024-12-14 16:49:47.833643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-12-14 16:49:47.843589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.884 [2024-12-14 16:49:47.843645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.884 [2024-12-14 16:49:47.843657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.884 [2024-12-14 16:49:47.843664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.884 [2024-12-14 16:49:47.843670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.884 [2024-12-14 16:49:47.843684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-12-14 16:49:47.853619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.884 [2024-12-14 16:49:47.853713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.884 [2024-12-14 16:49:47.853729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.884 [2024-12-14 16:49:47.853736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.884 [2024-12-14 16:49:47.853741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:17.884 [2024-12-14 16:49:47.853756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.884 qpair failed and we were unable to recover it. 
00:36:17.884 [2024-12-14 16:49:47.863621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.863671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.863684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.863690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.863696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.863710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.873600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.873698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.873710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.873716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.873721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.873735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.883648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.883703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.883716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.883722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.883727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.883741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.893749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.893800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.893812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.893818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.893827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.893841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.903700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.903756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.903768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.903774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.903780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.903794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.913723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.913779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.913791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.913798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.913803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.913817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.923834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.923891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.923903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.923909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.923915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.923929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.933895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.933948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.933960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.933966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.933972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.933986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.943822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.943877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.943890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.943896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.943902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.943916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.953855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.953907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.884 [2024-12-14 16:49:47.953920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.884 [2024-12-14 16:49:47.953926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.884 [2024-12-14 16:49:47.953932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.884 [2024-12-14 16:49:47.953946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.884 qpair failed and we were unable to recover it.
00:36:17.884 [2024-12-14 16:49:47.963874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:17.884 [2024-12-14 16:49:47.963938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:17.885 [2024-12-14 16:49:47.963950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:17.885 [2024-12-14 16:49:47.963957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:17.885 [2024-12-14 16:49:47.963962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:17.885 [2024-12-14 16:49:47.963977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:17.885 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:47.973891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:47.973962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:47.973976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:47.973983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:47.973989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:47.974004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:47.984016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:47.984070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:47.984086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:47.984093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:47.984099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:47.984113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:47.993969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:47.994025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:47.994038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:47.994044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:47.994049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:47.994063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:48.004007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:48.004061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:48.004074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:48.004081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:48.004089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:48.004106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:48.014019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:48.014073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:48.014086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:48.014092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:48.014098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:48.014112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:48.024082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:48.024173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:48.024187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:48.024197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:48.024205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:48.024219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:48.034088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:48.034138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:48.034151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:48.034157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:48.034163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:48.034178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:48.044174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:48.044230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:48.044243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:48.044250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:48.044255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:48.044270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:48.054141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:48.054194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:48.054207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:48.054213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:48.054220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:48.054234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:48.064230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:48.064280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:48.064293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:48.064299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.144 [2024-12-14 16:49:48.064305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.144 [2024-12-14 16:49:48.064320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.144 qpair failed and we were unable to recover it.
00:36:18.144 [2024-12-14 16:49:48.074252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.144 [2024-12-14 16:49:48.074305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.144 [2024-12-14 16:49:48.074318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.144 [2024-12-14 16:49:48.074324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.074330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.074345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.084302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.084357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.084370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.084376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.084382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.084397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.094248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.094302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.094316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.094323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.094329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.094342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.104325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.104378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.104390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.104397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.104403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.104418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.114369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.114429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.114442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.114448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.114455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.114470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.124340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.124396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.124409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.124416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.124422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.124438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.134412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.134472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.134484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.134491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.134497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.134512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.144390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.144445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.144458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.144465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.144471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.144487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.154472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.154526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.154540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.154550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.154560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.154577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.164491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.164547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.164563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.164571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.164577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.164592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.174540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.174600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.174614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.174621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.174627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.174642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.184664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.184733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.184746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.184753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.184759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.184775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.194553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.194609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.194622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.194629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.194635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.194652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.145 [2024-12-14 16:49:48.204710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:18.145 [2024-12-14 16:49:48.204773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:18.145 [2024-12-14 16:49:48.204786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:18.145 [2024-12-14 16:49:48.204793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:18.145 [2024-12-14 16:49:48.204799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:18.145 [2024-12-14 16:49:48.204814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:18.145 qpair failed and we were unable to recover it.
00:36:18.146 [2024-12-14 16:49:48.214683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.146 [2024-12-14 16:49:48.214739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.146 [2024-12-14 16:49:48.214752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.146 [2024-12-14 16:49:48.214759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.146 [2024-12-14 16:49:48.214766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.146 [2024-12-14 16:49:48.214781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.146 [2024-12-14 16:49:48.224677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.146 [2024-12-14 16:49:48.224734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.146 [2024-12-14 16:49:48.224747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.146 [2024-12-14 16:49:48.224755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.146 [2024-12-14 16:49:48.224761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.146 [2024-12-14 16:49:48.224777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.146 qpair failed and we were unable to recover it. 
00:36:18.404 [2024-12-14 16:49:48.234691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.404 [2024-12-14 16:49:48.234741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.404 [2024-12-14 16:49:48.234754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.404 [2024-12-14 16:49:48.234761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.404 [2024-12-14 16:49:48.234766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.404 [2024-12-14 16:49:48.234782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.404 qpair failed and we were unable to recover it. 
00:36:18.404 [2024-12-14 16:49:48.244706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.404 [2024-12-14 16:49:48.244778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.404 [2024-12-14 16:49:48.244790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.404 [2024-12-14 16:49:48.244797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.404 [2024-12-14 16:49:48.244803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.404 [2024-12-14 16:49:48.244818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.254773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.254829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.254842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.254849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.254855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.254870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.264802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.264901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.264914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.264921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.264927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.264942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.274809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.274860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.274873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.274880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.274886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.274901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.284861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.284915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.284931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.284938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.284945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.284959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.294820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.294873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.294885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.294891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.294897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.294912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.304902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.304952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.304965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.304971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.304977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.304993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.314924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.314977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.314989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.314996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.315002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.315017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.324948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.325003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.325016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.325023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.325032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.325048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.334985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.335059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.335072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.335079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.335085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.335100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.344938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.344992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.345005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.345011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.345018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.345033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.355080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.355135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.355149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.355155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.355162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.355177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.365081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.365138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.365152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.365158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.365165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.365179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.405 qpair failed and we were unable to recover it. 
00:36:18.405 [2024-12-14 16:49:48.375094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.405 [2024-12-14 16:49:48.375151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.405 [2024-12-14 16:49:48.375165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.405 [2024-12-14 16:49:48.375171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.405 [2024-12-14 16:49:48.375178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.405 [2024-12-14 16:49:48.375192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.385125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.385184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.385197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.385203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.385209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.385225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.395122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.395185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.395198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.395205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.395211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.395226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.405196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.405252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.405264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.405271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.405278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.405292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.415208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.415266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.415282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.415289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.415295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.415310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.425228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.425312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.425325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.425332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.425338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.425353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.435262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.435323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.435336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.435343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.435350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.435365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.445300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.445357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.445370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.445377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.445383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.445399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.455320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.455372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.455384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.455391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.455403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.455418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.465379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.465444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.465457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.465463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.465469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.465484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.475349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.475449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.475462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.475469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.475475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.475488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.406 [2024-12-14 16:49:48.485410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.406 [2024-12-14 16:49:48.485463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.406 [2024-12-14 16:49:48.485476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.406 [2024-12-14 16:49:48.485483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.406 [2024-12-14 16:49:48.485488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.406 [2024-12-14 16:49:48.485503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.406 qpair failed and we were unable to recover it. 
00:36:18.665 [... identical CONNECT failure sequence for tqpair=0x7fedf0000b90 repeated 34 more times, roughly every 10 ms, from 2024-12-14 16:49:48.495366 through 16:49:48.826489; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:36:18.926 [2024-12-14 16:49:48.836421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.836475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.836489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.836495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.836501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.836516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.846457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.846511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.846527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.846534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.846540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.846565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.856477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.856536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.856549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.856559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.856566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.856581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.866505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.866561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.866574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.866581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.866587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.866603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.876521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.876579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.876592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.876600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.876606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.876621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.886573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.886629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.886642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.886648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.886658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.886673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.896594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.896652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.896666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.896672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.896680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.896694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.906620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.906674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.906687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.906694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.906701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.906716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.916634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.916691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.916704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.926 [2024-12-14 16:49:48.916711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.926 [2024-12-14 16:49:48.916717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.926 [2024-12-14 16:49:48.916732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.926 qpair failed and we were unable to recover it. 
00:36:18.926 [2024-12-14 16:49:48.926678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.926 [2024-12-14 16:49:48.926737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.926 [2024-12-14 16:49:48.926749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:48.926756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:48.926763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:48.926777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:18.927 [2024-12-14 16:49:48.936677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.927 [2024-12-14 16:49:48.936729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.927 [2024-12-14 16:49:48.936741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:48.936748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:48.936754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:48.936769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:18.927 [2024-12-14 16:49:48.946736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.927 [2024-12-14 16:49:48.946789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.927 [2024-12-14 16:49:48.946803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:48.946809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:48.946815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:48.946831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:18.927 [2024-12-14 16:49:48.956751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.927 [2024-12-14 16:49:48.956834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.927 [2024-12-14 16:49:48.956847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:48.956854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:48.956860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:48.956874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:18.927 [2024-12-14 16:49:48.966814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.927 [2024-12-14 16:49:48.966883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.927 [2024-12-14 16:49:48.966896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:48.966903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:48.966909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:48.966924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:18.927 [2024-12-14 16:49:48.976821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.927 [2024-12-14 16:49:48.976880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.927 [2024-12-14 16:49:48.976897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:48.976905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:48.976911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:48.976925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:18.927 [2024-12-14 16:49:48.986820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.927 [2024-12-14 16:49:48.986871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.927 [2024-12-14 16:49:48.986884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:48.986891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:48.986897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:48.986913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:18.927 [2024-12-14 16:49:48.996923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.927 [2024-12-14 16:49:48.996976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.927 [2024-12-14 16:49:48.996989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:48.996995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:48.997002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:48.997017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:18.927 [2024-12-14 16:49:49.006902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:18.927 [2024-12-14 16:49:49.006960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:18.927 [2024-12-14 16:49:49.006972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:18.927 [2024-12-14 16:49:49.006979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:18.927 [2024-12-14 16:49:49.006985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:18.927 [2024-12-14 16:49:49.007000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:18.927 qpair failed and we were unable to recover it. 
00:36:19.186 [2024-12-14 16:49:49.016936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.186 [2024-12-14 16:49:49.016999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.186 [2024-12-14 16:49:49.017011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.186 [2024-12-14 16:49:49.017018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.186 [2024-12-14 16:49:49.017028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.186 [2024-12-14 16:49:49.017043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.186 qpair failed and we were unable to recover it. 
00:36:19.186 [2024-12-14 16:49:49.026970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.186 [2024-12-14 16:49:49.027025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.186 [2024-12-14 16:49:49.027038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.186 [2024-12-14 16:49:49.027046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.186 [2024-12-14 16:49:49.027051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.186 [2024-12-14 16:49:49.027066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.186 qpair failed and we were unable to recover it. 
00:36:19.186 [2024-12-14 16:49:49.036992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.186 [2024-12-14 16:49:49.037046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.186 [2024-12-14 16:49:49.037059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.186 [2024-12-14 16:49:49.037066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.186 [2024-12-14 16:49:49.037072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.186 [2024-12-14 16:49:49.037086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.186 qpair failed and we were unable to recover it. 
00:36:19.186 [2024-12-14 16:49:49.047057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.186 [2024-12-14 16:49:49.047114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.186 [2024-12-14 16:49:49.047127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.187 [2024-12-14 16:49:49.047134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.187 [2024-12-14 16:49:49.047140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.187 [2024-12-14 16:49:49.047155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.187 qpair failed and we were unable to recover it. 
00:36:19.187 [2024-12-14 16:49:49.057049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.187 [2024-12-14 16:49:49.057106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.187 [2024-12-14 16:49:49.057119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.187 [2024-12-14 16:49:49.057126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.187 [2024-12-14 16:49:49.057134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.187 [2024-12-14 16:49:49.057148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.187 qpair failed and we were unable to recover it. 
00:36:19.187 [2024-12-14 16:49:49.067098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.187 [2024-12-14 16:49:49.067153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.187 [2024-12-14 16:49:49.067167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.187 [2024-12-14 16:49:49.067174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.187 [2024-12-14 16:49:49.067181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.187 [2024-12-14 16:49:49.067196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.187 qpair failed and we were unable to recover it. 
00:36:19.187 [2024-12-14 16:49:49.077100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.187 [2024-12-14 16:49:49.077154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.187 [2024-12-14 16:49:49.077167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.187 [2024-12-14 16:49:49.077174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.187 [2024-12-14 16:49:49.077180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.187 [2024-12-14 16:49:49.077195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.187 qpair failed and we were unable to recover it. 
00:36:19.187 [2024-12-14 16:49:49.087138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.187 [2024-12-14 16:49:49.087193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.187 [2024-12-14 16:49:49.087206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.187 [2024-12-14 16:49:49.087213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.187 [2024-12-14 16:49:49.087219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.187 [2024-12-14 16:49:49.087233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.187 qpair failed and we were unable to recover it. 
00:36:19.187 [2024-12-14 16:49:49.097081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.187 [2024-12-14 16:49:49.097148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.187 [2024-12-14 16:49:49.097161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.187 [2024-12-14 16:49:49.097168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.187 [2024-12-14 16:49:49.097174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.187 [2024-12-14 16:49:49.097188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.187 qpair failed and we were unable to recover it. 
00:36:19.187 [2024-12-14 16:49:49.107120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.187 [2024-12-14 16:49:49.107177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.187 [2024-12-14 16:49:49.107193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.187 [2024-12-14 16:49:49.107200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.187 [2024-12-14 16:49:49.107206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.187 [2024-12-14 16:49:49.107220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.187 qpair failed and we were unable to recover it.
00:36:19.187 [2024-12-14 16:49:49.117149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.187 [2024-12-14 16:49:49.117201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.187 [2024-12-14 16:49:49.117214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.187 [2024-12-14 16:49:49.117221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.187 [2024-12-14 16:49:49.117227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.187 [2024-12-14 16:49:49.117242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.187 qpair failed and we were unable to recover it.
00:36:19.187 [2024-12-14 16:49:49.127246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.187 [2024-12-14 16:49:49.127349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.187 [2024-12-14 16:49:49.127362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.187 [2024-12-14 16:49:49.127369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.187 [2024-12-14 16:49:49.127375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.187 [2024-12-14 16:49:49.127389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.187 qpair failed and we were unable to recover it.
00:36:19.187 [2024-12-14 16:49:49.137293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.187 [2024-12-14 16:49:49.137355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.187 [2024-12-14 16:49:49.137369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.187 [2024-12-14 16:49:49.137376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.187 [2024-12-14 16:49:49.137382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.187 [2024-12-14 16:49:49.137396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.187 qpair failed and we were unable to recover it.
00:36:19.187 [2024-12-14 16:49:49.147294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.187 [2024-12-14 16:49:49.147345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.187 [2024-12-14 16:49:49.147359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.187 [2024-12-14 16:49:49.147372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.187 [2024-12-14 16:49:49.147378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.187 [2024-12-14 16:49:49.147393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.187 qpair failed and we were unable to recover it.
00:36:19.187 [2024-12-14 16:49:49.157302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.187 [2024-12-14 16:49:49.157355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.187 [2024-12-14 16:49:49.157369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.187 [2024-12-14 16:49:49.157376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.187 [2024-12-14 16:49:49.157382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.187 [2024-12-14 16:49:49.157396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.187 qpair failed and we were unable to recover it.
00:36:19.187 [2024-12-14 16:49:49.167358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.187 [2024-12-14 16:49:49.167431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.187 [2024-12-14 16:49:49.167445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.187 [2024-12-14 16:49:49.167452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.187 [2024-12-14 16:49:49.167458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.187 [2024-12-14 16:49:49.167473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.187 qpair failed and we were unable to recover it.
00:36:19.187 [2024-12-14 16:49:49.177382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.187 [2024-12-14 16:49:49.177436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.187 [2024-12-14 16:49:49.177449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.187 [2024-12-14 16:49:49.177456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.187 [2024-12-14 16:49:49.177462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.187 [2024-12-14 16:49:49.177477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.187 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.187411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.187506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.187520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.187527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.187533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.187548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.197360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.197416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.197430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.197437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.197444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.197462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.207468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.207523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.207538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.207545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.207552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.207571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.217477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.217532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.217545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.217552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.217564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.217580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.227518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.227577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.227590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.227597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.227603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.227619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.237468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.237527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.237540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.237546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.237553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.237571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.247586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.247643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.247657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.247663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.247670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.247686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.257614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.257668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.257681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.257688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.257694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.257709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.188 [2024-12-14 16:49:49.267638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.188 [2024-12-14 16:49:49.267700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.188 [2024-12-14 16:49:49.267712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.188 [2024-12-14 16:49:49.267720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.188 [2024-12-14 16:49:49.267726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.188 [2024-12-14 16:49:49.267741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.188 qpair failed and we were unable to recover it.
00:36:19.447 [2024-12-14 16:49:49.277605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.447 [2024-12-14 16:49:49.277680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.447 [2024-12-14 16:49:49.277693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.447 [2024-12-14 16:49:49.277704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.447 [2024-12-14 16:49:49.277710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.447 [2024-12-14 16:49:49.277726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.447 qpair failed and we were unable to recover it.
00:36:19.447 [2024-12-14 16:49:49.287677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.447 [2024-12-14 16:49:49.287731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.447 [2024-12-14 16:49:49.287744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.447 [2024-12-14 16:49:49.287751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.447 [2024-12-14 16:49:49.287758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.447 [2024-12-14 16:49:49.287775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.447 qpair failed and we were unable to recover it.
00:36:19.447 [2024-12-14 16:49:49.297666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.447 [2024-12-14 16:49:49.297742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.447 [2024-12-14 16:49:49.297757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.447 [2024-12-14 16:49:49.297764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.447 [2024-12-14 16:49:49.297770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.447 [2024-12-14 16:49:49.297785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.447 qpair failed and we were unable to recover it.
00:36:19.447 [2024-12-14 16:49:49.307699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.307766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.307779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.307785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.307791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.307807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.317789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.317843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.317857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.317864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.317870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.317889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.327823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.327877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.327890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.327896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.327902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.327917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.337901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.337962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.337975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.337982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.337988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.338003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.347811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.347868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.347881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.347889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.347896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.347911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.357827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.357903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.357916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.357923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.357929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.357944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.367965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.368032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.368045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.368053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.368059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.368074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.377908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.377964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.377977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.377985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.377991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.378005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.387980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.388053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.388067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.388073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.388079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.388094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.398008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.398058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.398071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.398078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.398085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.398100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.408070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.408126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.408142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.408150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.408156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.408171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.418069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.418168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.418181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.418188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.418194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.418209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.428040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.428093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.428106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.428112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.448 [2024-12-14 16:49:49.428119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.448 [2024-12-14 16:49:49.428134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.448 qpair failed and we were unable to recover it.
00:36:19.448 [2024-12-14 16:49:49.438126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.448 [2024-12-14 16:49:49.438180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.448 [2024-12-14 16:49:49.438194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.448 [2024-12-14 16:49:49.438201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.449 [2024-12-14 16:49:49.438207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.449 [2024-12-14 16:49:49.438222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.449 qpair failed and we were unable to recover it.
00:36:19.449 [2024-12-14 16:49:49.448093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.449 [2024-12-14 16:49:49.448150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.449 [2024-12-14 16:49:49.448164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.449 [2024-12-14 16:49:49.448171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.449 [2024-12-14 16:49:49.448180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.449 [2024-12-14 16:49:49.448195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.449 qpair failed and we were unable to recover it.
00:36:19.449 [2024-12-14 16:49:49.458190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.449 [2024-12-14 16:49:49.458240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.449 [2024-12-14 16:49:49.458254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.449 [2024-12-14 16:49:49.458261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.449 [2024-12-14 16:49:49.458267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.449 [2024-12-14 16:49:49.458282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.449 qpair failed and we were unable to recover it. 
00:36:19.449 [2024-12-14 16:49:49.468209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.449 [2024-12-14 16:49:49.468264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.449 [2024-12-14 16:49:49.468278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.449 [2024-12-14 16:49:49.468284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.449 [2024-12-14 16:49:49.468291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.449 [2024-12-14 16:49:49.468306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.449 qpair failed and we were unable to recover it. 
00:36:19.449 [2024-12-14 16:49:49.478185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.449 [2024-12-14 16:49:49.478234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.449 [2024-12-14 16:49:49.478248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.449 [2024-12-14 16:49:49.478255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.449 [2024-12-14 16:49:49.478261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.449 [2024-12-14 16:49:49.478276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.449 qpair failed and we were unable to recover it. 
00:36:19.449 [2024-12-14 16:49:49.488321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.449 [2024-12-14 16:49:49.488402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.449 [2024-12-14 16:49:49.488416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.449 [2024-12-14 16:49:49.488423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.449 [2024-12-14 16:49:49.488429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.449 [2024-12-14 16:49:49.488444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.449 qpair failed and we were unable to recover it. 
00:36:19.449 [2024-12-14 16:49:49.498235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.449 [2024-12-14 16:49:49.498292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.449 [2024-12-14 16:49:49.498306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.449 [2024-12-14 16:49:49.498312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.449 [2024-12-14 16:49:49.498318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.449 [2024-12-14 16:49:49.498333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.449 qpair failed and we were unable to recover it. 
00:36:19.449 [2024-12-14 16:49:49.508263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.449 [2024-12-14 16:49:49.508319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.449 [2024-12-14 16:49:49.508332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.449 [2024-12-14 16:49:49.508339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.449 [2024-12-14 16:49:49.508346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.449 [2024-12-14 16:49:49.508361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.449 qpair failed and we were unable to recover it. 
00:36:19.449 [2024-12-14 16:49:49.518349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.449 [2024-12-14 16:49:49.518405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.449 [2024-12-14 16:49:49.518419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.449 [2024-12-14 16:49:49.518426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.449 [2024-12-14 16:49:49.518432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.449 [2024-12-14 16:49:49.518447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.449 qpair failed and we were unable to recover it. 
00:36:19.449 [2024-12-14 16:49:49.528384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.449 [2024-12-14 16:49:49.528440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.449 [2024-12-14 16:49:49.528453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.449 [2024-12-14 16:49:49.528460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.449 [2024-12-14 16:49:49.528466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.449 [2024-12-14 16:49:49.528481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.449 qpair failed and we were unable to recover it. 
00:36:19.708 [2024-12-14 16:49:49.538338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.708 [2024-12-14 16:49:49.538398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.708 [2024-12-14 16:49:49.538414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.708 [2024-12-14 16:49:49.538422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.708 [2024-12-14 16:49:49.538428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.708 [2024-12-14 16:49:49.538442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.708 qpair failed and we were unable to recover it. 
00:36:19.708 [2024-12-14 16:49:49.548453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.708 [2024-12-14 16:49:49.548508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.708 [2024-12-14 16:49:49.548522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.708 [2024-12-14 16:49:49.548528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.708 [2024-12-14 16:49:49.548535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.708 [2024-12-14 16:49:49.548550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.708 qpair failed and we were unable to recover it. 
00:36:19.708 [2024-12-14 16:49:49.558465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.708 [2024-12-14 16:49:49.558517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.708 [2024-12-14 16:49:49.558531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.708 [2024-12-14 16:49:49.558538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.708 [2024-12-14 16:49:49.558545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.708 [2024-12-14 16:49:49.558566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.708 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.568495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.568550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.568567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.568574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.568580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.568595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.578532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.578594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.578608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.578615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.578624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.578639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.588489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.588543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.588560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.588567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.588573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.588589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.598591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.598648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.598661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.598668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.598675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.598690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.608639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.608695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.608708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.608715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.608721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.608736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.618654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.618711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.618724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.618730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.618736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.618751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.628611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.628662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.628676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.628682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.628688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.628704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.638742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.638798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.638812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.638819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.638825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.638839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.648756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.648814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.648827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.648834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.648840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.648855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.658816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.658889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.658902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.658909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.658915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.658930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.668791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.668843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.668859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.668866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.668873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.668888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.678829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.678885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.678898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.678905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.678912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.678926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.688815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.688902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.709 [2024-12-14 16:49:49.688915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.709 [2024-12-14 16:49:49.688922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.709 [2024-12-14 16:49:49.688928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.709 [2024-12-14 16:49:49.688943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.709 qpair failed and we were unable to recover it. 
00:36:19.709 [2024-12-14 16:49:49.698888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.709 [2024-12-14 16:49:49.698944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.710 [2024-12-14 16:49:49.698957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.710 [2024-12-14 16:49:49.698964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.710 [2024-12-14 16:49:49.698971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.710 [2024-12-14 16:49:49.698985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.710 qpair failed and we were unable to recover it. 
00:36:19.710 [2024-12-14 16:49:49.708967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.710 [2024-12-14 16:49:49.709031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.710 [2024-12-14 16:49:49.709044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.710 [2024-12-14 16:49:49.709054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.710 [2024-12-14 16:49:49.709061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.710 [2024-12-14 16:49:49.709075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.710 qpair failed and we were unable to recover it. 
00:36:19.710 [2024-12-14 16:49:49.718944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.710 [2024-12-14 16:49:49.718993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.710 [2024-12-14 16:49:49.719006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.710 [2024-12-14 16:49:49.719013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.710 [2024-12-14 16:49:49.719019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:19.710 [2024-12-14 16:49:49.719034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.710 qpair failed and we were unable to recover it. 
00:36:19.710 [2024-12-14 16:49:49.728980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.710 [2024-12-14 16:49:49.729035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.710 [2024-12-14 16:49:49.729049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.710 [2024-12-14 16:49:49.729055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.710 [2024-12-14 16:49:49.729062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.710 [2024-12-14 16:49:49.729077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.710 qpair failed and we were unable to recover it.
00:36:19.710 [2024-12-14 16:49:49.738997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.710 [2024-12-14 16:49:49.739054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.710 [2024-12-14 16:49:49.739067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.710 [2024-12-14 16:49:49.739074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.710 [2024-12-14 16:49:49.739080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.710 [2024-12-14 16:49:49.739095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.710 qpair failed and we were unable to recover it.
00:36:19.710 [2024-12-14 16:49:49.749030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.710 [2024-12-14 16:49:49.749081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.710 [2024-12-14 16:49:49.749095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.710 [2024-12-14 16:49:49.749102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.710 [2024-12-14 16:49:49.749107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.710 [2024-12-14 16:49:49.749125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.710 qpair failed and we were unable to recover it.
00:36:19.710 [2024-12-14 16:49:49.759108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.710 [2024-12-14 16:49:49.759200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.710 [2024-12-14 16:49:49.759213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.710 [2024-12-14 16:49:49.759220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.710 [2024-12-14 16:49:49.759226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.710 [2024-12-14 16:49:49.759241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.710 qpair failed and we were unable to recover it.
00:36:19.710 [2024-12-14 16:49:49.769109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.710 [2024-12-14 16:49:49.769168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.710 [2024-12-14 16:49:49.769181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.710 [2024-12-14 16:49:49.769189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.710 [2024-12-14 16:49:49.769195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.710 [2024-12-14 16:49:49.769209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.710 qpair failed and we were unable to recover it.
00:36:19.710 [2024-12-14 16:49:49.779121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.710 [2024-12-14 16:49:49.779175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.710 [2024-12-14 16:49:49.779188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.710 [2024-12-14 16:49:49.779194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.710 [2024-12-14 16:49:49.779201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.710 [2024-12-14 16:49:49.779215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.710 qpair failed and we were unable to recover it.
00:36:19.710 [2024-12-14 16:49:49.789165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.710 [2024-12-14 16:49:49.789223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.710 [2024-12-14 16:49:49.789237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.710 [2024-12-14 16:49:49.789244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.710 [2024-12-14 16:49:49.789250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.710 [2024-12-14 16:49:49.789265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.710 qpair failed and we were unable to recover it.
00:36:19.969 [2024-12-14 16:49:49.799160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.969 [2024-12-14 16:49:49.799219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.969 [2024-12-14 16:49:49.799232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.969 [2024-12-14 16:49:49.799239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.969 [2024-12-14 16:49:49.799245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.969 [2024-12-14 16:49:49.799260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.969 qpair failed and we were unable to recover it.
00:36:19.969 [2024-12-14 16:49:49.809195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.969 [2024-12-14 16:49:49.809251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.969 [2024-12-14 16:49:49.809264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.969 [2024-12-14 16:49:49.809271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.969 [2024-12-14 16:49:49.809277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.969 [2024-12-14 16:49:49.809292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.969 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.819226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.819281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.819294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.819301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.819307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.819322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.829283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.829339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.829352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.829359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.829366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.829381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.839293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.839350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.839364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.839374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.839381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.839396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.849322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.849381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.849394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.849401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.849408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.849423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.859338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.859428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.859441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.859448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.859455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.859469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.869372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.869424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.869438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.869444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.869451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.869466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.879379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.879433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.879447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.879454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.879461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.879480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.889425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.889482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.889496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.889502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.889509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.889524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.899452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.899506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.899519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.899526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.899532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.899547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.909486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.909542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.909558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.909565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.909571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.909587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.919501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.919560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.919574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.919581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.919587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.919602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.929544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.929605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.929619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.929626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.929633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.929648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.939578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.970 [2024-12-14 16:49:49.939634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.970 [2024-12-14 16:49:49.939647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.970 [2024-12-14 16:49:49.939654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.970 [2024-12-14 16:49:49.939661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.970 [2024-12-14 16:49:49.939676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.970 qpair failed and we were unable to recover it.
00:36:19.970 [2024-12-14 16:49:49.949604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:49.949659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:49.949672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:49.949679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:49.949686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:49.949701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:49.959620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:49.959676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:49.959689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:49.959697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:49.959703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:49.959718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:49.969661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:49.969716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:49.969733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:49.969741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:49.969748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:49.969763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:49.979687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:49.979746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:49.979759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:49.979767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:49.979774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:49.979789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:49.989709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:49.989762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:49.989776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:49.989782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:49.989789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:49.989803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:49.999735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:49.999790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:49.999804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:49.999811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:49.999818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:49.999832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:50.010091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:50.010190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:50.010215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:50.010235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:50.010277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:50.010310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:50.019826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:50.019884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:50.019900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:50.019908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:50.019916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:50.019933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:50.029905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:50.029978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:50.029996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:50.030006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:50.030016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:50.030036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:50.039889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:50.039949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:50.039966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:50.039975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:50.039982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:50.040000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:19.971 [2024-12-14 16:49:50.049907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:19.971 [2024-12-14 16:49:50.049962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:19.971 [2024-12-14 16:49:50.049977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:19.971 [2024-12-14 16:49:50.049985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:19.971 [2024-12-14 16:49:50.049991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:19.971 [2024-12-14 16:49:50.050007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:19.971 qpair failed and we were unable to recover it.
00:36:20.230 [2024-12-14 16:49:50.059930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.230 [2024-12-14 16:49:50.059986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.230 [2024-12-14 16:49:50.060000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.230 [2024-12-14 16:49:50.060007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.230 [2024-12-14 16:49:50.060013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.230 [2024-12-14 16:49:50.060029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.230 qpair failed and we were unable to recover it.
00:36:20.231 [2024-12-14 16:49:50.070039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.231 [2024-12-14 16:49:50.070151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.231 [2024-12-14 16:49:50.070171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.231 [2024-12-14 16:49:50.070180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.231 [2024-12-14 16:49:50.070189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.231 [2024-12-14 16:49:50.070207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.231 qpair failed and we were unable to recover it.
00:36:20.231 [2024-12-14 16:49:50.079954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.080012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.080028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.080037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.080044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.080061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.090006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.090091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.090105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.090112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.090118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.090134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.100054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.100113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.100131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.100139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.100145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.100160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.110069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.110124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.110137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.110144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.110151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.110166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.120094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.120151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.120165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.120172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.120179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.120193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.130176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.130236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.130250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.130258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.130264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.130279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.140158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.140216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.140230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.140238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.140247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.140263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.150186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.150245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.150259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.150266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.150273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.150288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.160225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.160276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.160290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.160297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.160304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.160319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.170192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.170274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.170289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.170297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.170305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.170321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.180277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.180374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.180388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.180395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.180401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.180416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.190249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.231 [2024-12-14 16:49:50.190349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.231 [2024-12-14 16:49:50.190363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.231 [2024-12-14 16:49:50.190370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.231 [2024-12-14 16:49:50.190376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.231 [2024-12-14 16:49:50.190391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.231 qpair failed and we were unable to recover it. 
00:36:20.231 [2024-12-14 16:49:50.200373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.200427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.200441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.200447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.200454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.200469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.210387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.210444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.210457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.210464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.210471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.210486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.220401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.220458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.220472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.220479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.220485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.220500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.230468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.230523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.230540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.230547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.230553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.230574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.240449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.240504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.240516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.240523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.240530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.240544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.250421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.250473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.250487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.250494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.250500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.250516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.260511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.260566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.260580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.260586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.260593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.260608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.270547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.270607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.270621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.270631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.270637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.270652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.280614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.280663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.280676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.280683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.280690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.280705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.290613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.290682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.290695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.290702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.290709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.290723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.300628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.300683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.300696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.300703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.300710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.300725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.232 [2024-12-14 16:49:50.310644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.232 [2024-12-14 16:49:50.310701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.232 [2024-12-14 16:49:50.310714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.232 [2024-12-14 16:49:50.310722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.232 [2024-12-14 16:49:50.310728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.232 [2024-12-14 16:49:50.310747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.232 qpair failed and we were unable to recover it. 
00:36:20.491 [2024-12-14 16:49:50.320683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.492 [2024-12-14 16:49:50.320747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.492 [2024-12-14 16:49:50.320760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.492 [2024-12-14 16:49:50.320768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.492 [2024-12-14 16:49:50.320774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.492 [2024-12-14 16:49:50.320789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.492 qpair failed and we were unable to recover it. 
00:36:20.492 [2024-12-14 16:49:50.330742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.492 [2024-12-14 16:49:50.330850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.492 [2024-12-14 16:49:50.330864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.492 [2024-12-14 16:49:50.330871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.492 [2024-12-14 16:49:50.330877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.492 [2024-12-14 16:49:50.330892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.492 qpair failed and we were unable to recover it. 
00:36:20.492 [2024-12-14 16:49:50.340683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.492 [2024-12-14 16:49:50.340736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.492 [2024-12-14 16:49:50.340750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.492 [2024-12-14 16:49:50.340757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.492 [2024-12-14 16:49:50.340764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.492 [2024-12-14 16:49:50.340779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.492 qpair failed and we were unable to recover it. 
00:36:20.492 [2024-12-14 16:49:50.350758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.492 [2024-12-14 16:49:50.350813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.492 [2024-12-14 16:49:50.350827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.492 [2024-12-14 16:49:50.350834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.492 [2024-12-14 16:49:50.350840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.492 [2024-12-14 16:49:50.350855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.492 qpair failed and we were unable to recover it. 
00:36:20.492 [2024-12-14 16:49:50.360750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.492 [2024-12-14 16:49:50.360800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.492 [2024-12-14 16:49:50.360813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.492 [2024-12-14 16:49:50.360820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.492 [2024-12-14 16:49:50.360826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.492 [2024-12-14 16:49:50.360841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.492 qpair failed and we were unable to recover it. 
00:36:20.492 [2024-12-14 16:49:50.370773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.492 [2024-12-14 16:49:50.370829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.492 [2024-12-14 16:49:50.370842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.492 [2024-12-14 16:49:50.370849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.492 [2024-12-14 16:49:50.370856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.492 [2024-12-14 16:49:50.370870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.492 qpair failed and we were unable to recover it.
00:36:20.492 [2024-12-14 16:49:50.380801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.492 [2024-12-14 16:49:50.380859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.492 [2024-12-14 16:49:50.380873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.492 [2024-12-14 16:49:50.380880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.492 [2024-12-14 16:49:50.380887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.492 [2024-12-14 16:49:50.380901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.492 qpair failed and we were unable to recover it.
00:36:20.492 [2024-12-14 16:49:50.390816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.492 [2024-12-14 16:49:50.390883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.492 [2024-12-14 16:49:50.390897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.492 [2024-12-14 16:49:50.390904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.492 [2024-12-14 16:49:50.390911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.492 [2024-12-14 16:49:50.390926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.492 qpair failed and we were unable to recover it.
00:36:20.492 [2024-12-14 16:49:50.400830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.492 [2024-12-14 16:49:50.400885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.492 [2024-12-14 16:49:50.400898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.492 [2024-12-14 16:49:50.400909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.492 [2024-12-14 16:49:50.400915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.492 [2024-12-14 16:49:50.400929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.492 qpair failed and we were unable to recover it.
00:36:20.492 [2024-12-14 16:49:50.410891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.492 [2024-12-14 16:49:50.410948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.492 [2024-12-14 16:49:50.410961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.492 [2024-12-14 16:49:50.410968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.492 [2024-12-14 16:49:50.410975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.492 [2024-12-14 16:49:50.410989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.492 qpair failed and we were unable to recover it.
00:36:20.492 [2024-12-14 16:49:50.420982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.492 [2024-12-14 16:49:50.421074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.492 [2024-12-14 16:49:50.421087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.492 [2024-12-14 16:49:50.421094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.492 [2024-12-14 16:49:50.421100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.492 [2024-12-14 16:49:50.421115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.492 qpair failed and we were unable to recover it.
00:36:20.492 [2024-12-14 16:49:50.431053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.492 [2024-12-14 16:49:50.431112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.492 [2024-12-14 16:49:50.431126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.492 [2024-12-14 16:49:50.431133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.492 [2024-12-14 16:49:50.431139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.492 [2024-12-14 16:49:50.431154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.492 qpair failed and we were unable to recover it.
00:36:20.492 [2024-12-14 16:49:50.440999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.492 [2024-12-14 16:49:50.441055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.492 [2024-12-14 16:49:50.441068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.492 [2024-12-14 16:49:50.441075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.492 [2024-12-14 16:49:50.441081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.492 [2024-12-14 16:49:50.441099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.492 qpair failed and we were unable to recover it.
00:36:20.492 [2024-12-14 16:49:50.451091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.451148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.451162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.451169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.451175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.451189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.461024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.461103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.461117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.461124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.461130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.461145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.471114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.471171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.471184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.471191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.471197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.471212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.481134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.481185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.481199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.481206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.481212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.481227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.491115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.491170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.491184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.491191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.491198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.491213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.501197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.501252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.501265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.501272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.501278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.501292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.511213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.511270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.511284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.511290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.511297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.511311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.521237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.521287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.521300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.521307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.521313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.521328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.531290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.531346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.531363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.531370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.531376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.531391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.541234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.541287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.541300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.541307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.541313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.541328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.551330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.551386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.551400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.551407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.551413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.551428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.561346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.561397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.561410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.561417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.561424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.561438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.493 [2024-12-14 16:49:50.571397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.493 [2024-12-14 16:49:50.571468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.493 [2024-12-14 16:49:50.571481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.493 [2024-12-14 16:49:50.571488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.493 [2024-12-14 16:49:50.571500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.493 [2024-12-14 16:49:50.571515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.493 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.581407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.581462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.581476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.581482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.581488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.581503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.591455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.591524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.591538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.591544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.591551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.591568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.601466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.601521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.601534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.601541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.601547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.601566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.611499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.611559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.611573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.611580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.611587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.611602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.621527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.621583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.621597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.621604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.621610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.621625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.631552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.631614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.631628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.631635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.631641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.631656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.641617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.641696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.641710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.641717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.641723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.641738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.651614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.651672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.651686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.651693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.651699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.651714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.661645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.661701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.661717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.661724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.661731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.661746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.671654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:20.753 [2024-12-14 16:49:50.671707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:20.753 [2024-12-14 16:49:50.671720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:20.753 [2024-12-14 16:49:50.671726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:20.753 [2024-12-14 16:49:50.671732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90
00:36:20.753 [2024-12-14 16:49:50.671747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:20.753 qpair failed and we were unable to recover it.
00:36:20.753 [2024-12-14 16:49:50.681722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.753 [2024-12-14 16:49:50.681784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.753 [2024-12-14 16:49:50.681797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.753 [2024-12-14 16:49:50.681803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.753 [2024-12-14 16:49:50.681810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.753 [2024-12-14 16:49:50.681824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.753 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.691754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.691818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.691831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.691838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.691844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.691858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.701696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.701764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.701778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.701784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.701794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.701809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.711806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.711862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.711875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.711882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.711888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.711903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.721818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.721898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.721911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.721918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.721924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.721938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.731799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.731879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.731892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.731898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.731905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.731920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.741878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.741938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.741952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.741958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.741965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.741979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.751843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.751894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.751907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.751915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.751921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.751936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.761905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.761955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.761968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.761974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.761980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.761995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.771959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.772014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.772027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.772034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.772040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.772056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.782012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.782080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.782093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.782100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.782106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.782121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.791971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.792018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.792034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.792041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.792048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.792063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.802069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.802129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.802141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.802148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.802154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.802169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.812109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.754 [2024-12-14 16:49:50.812167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.754 [2024-12-14 16:49:50.812190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.754 [2024-12-14 16:49:50.812198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.754 [2024-12-14 16:49:50.812204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.754 [2024-12-14 16:49:50.812224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.754 qpair failed and we were unable to recover it. 
00:36:20.754 [2024-12-14 16:49:50.822051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.755 [2024-12-14 16:49:50.822117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.755 [2024-12-14 16:49:50.822130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.755 [2024-12-14 16:49:50.822138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.755 [2024-12-14 16:49:50.822143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.755 [2024-12-14 16:49:50.822158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.755 qpair failed and we were unable to recover it. 
00:36:20.755 [2024-12-14 16:49:50.832110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:20.755 [2024-12-14 16:49:50.832164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:20.755 [2024-12-14 16:49:50.832177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:20.755 [2024-12-14 16:49:50.832188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:20.755 [2024-12-14 16:49:50.832194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:20.755 [2024-12-14 16:49:50.832210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:20.755 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.842089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.842153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.842166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.842173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.842180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.842195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.852136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.852191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.852205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.852212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.852218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.852233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.862146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.862212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.862225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.862232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.862238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.862254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.872196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.872267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.872280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.872287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.872294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.872312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.882220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.882277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.882290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.882297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.882303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.882318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.892247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.892302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.892315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.892322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.892328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.892343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.902376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.902427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.902441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.902447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.902454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.902469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.912381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.912431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.912444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.912451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.912457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.912472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.922431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.922488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.922501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.922508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.922514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.922529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.932360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.932418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.932432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.932439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.932445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.932459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.942464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.942517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.942530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.942537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.942543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.942563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.952425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.952475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.952489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.952496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.952502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.014 [2024-12-14 16:49:50.952517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.014 qpair failed and we were unable to recover it. 
00:36:21.014 [2024-12-14 16:49:50.962544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.014 [2024-12-14 16:49:50.962623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.014 [2024-12-14 16:49:50.962637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.014 [2024-12-14 16:49:50.962648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.014 [2024-12-14 16:49:50.962654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:50.962669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:50.972599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:50.972700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:50.972714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:50.972721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:50.972727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:50.972742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:50.982575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:50.982630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:50.982643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:50.982650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:50.982656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:50.982671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:50.992590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:50.992647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:50.992659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:50.992666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:50.992673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:50.992688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.002618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.002670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.002683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.002690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.002696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.002715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.012675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.012739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.012752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.012760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.012765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.012781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.022613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.022674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.022687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.022694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.022700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.022715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.032714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.032772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.032785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.032792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.032799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.032813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.042690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.042744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.042757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.042764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.042770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.042784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.052780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.052866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.052880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.052886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.052892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.052907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.062743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.062803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.062816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.062823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.062830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.062843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.072832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.072885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.072898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.072904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.072911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.072925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.082863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.082920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.082933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.082940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.082946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.082961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.015 [2024-12-14 16:49:51.092839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.015 [2024-12-14 16:49:51.092894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.015 [2024-12-14 16:49:51.092910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.015 [2024-12-14 16:49:51.092917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.015 [2024-12-14 16:49:51.092923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.015 [2024-12-14 16:49:51.092938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.015 qpair failed and we were unable to recover it. 
00:36:21.274 [2024-12-14 16:49:51.102926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.274 [2024-12-14 16:49:51.102980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.274 [2024-12-14 16:49:51.102993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.274 [2024-12-14 16:49:51.103000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.274 [2024-12-14 16:49:51.103006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.274 [2024-12-14 16:49:51.103020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.274 qpair failed and we were unable to recover it. 
00:36:21.274 [2024-12-14 16:49:51.112989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.274 [2024-12-14 16:49:51.113067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.274 [2024-12-14 16:49:51.113080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.274 [2024-12-14 16:49:51.113086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.274 [2024-12-14 16:49:51.113093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.274 [2024-12-14 16:49:51.113107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.274 qpair failed and we were unable to recover it. 
00:36:21.274 [2024-12-14 16:49:51.122995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.274 [2024-12-14 16:49:51.123049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.274 [2024-12-14 16:49:51.123062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.274 [2024-12-14 16:49:51.123069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.274 [2024-12-14 16:49:51.123075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.274 [2024-12-14 16:49:51.123090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.274 qpair failed and we were unable to recover it. 
00:36:21.274 [2024-12-14 16:49:51.132993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.274 [2024-12-14 16:49:51.133090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.274 [2024-12-14 16:49:51.133103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.274 [2024-12-14 16:49:51.133110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.274 [2024-12-14 16:49:51.133120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.274 [2024-12-14 16:49:51.133134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.274 qpair failed and we were unable to recover it. 
00:36:21.274 [2024-12-14 16:49:51.143092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.274 [2024-12-14 16:49:51.143147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.274 [2024-12-14 16:49:51.143160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.274 [2024-12-14 16:49:51.143167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.274 [2024-12-14 16:49:51.143173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.143188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.153082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.153149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.153163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.153170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.153176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.153191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.163115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.163175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.163188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.163196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.163202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.163216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.173137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.173192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.173205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.173212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.173218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.173233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.183184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.183238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.183252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.183259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.183265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.183280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.193115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.193166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.193180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.193187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.193193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.193208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.203218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.203307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.203320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.203327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.203333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.203348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.213182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.213240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.213252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.213259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.213266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.213281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.223274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.223329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.223345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.223352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.223358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.223373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.233296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.233345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.233358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.233365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.233371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.233387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.243348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.243427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.243440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.243447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.243454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf0000b90 00:36:21.275 [2024-12-14 16:49:51.243468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.253376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.253480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.253535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.253573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.253595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf8000b90 00:36:21.275 [2024-12-14 16:49:51.253648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.263400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.263509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.263573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.263599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.263628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbebcd0 00:36:21.275 [2024-12-14 16:49:51.263678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.273408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.275 [2024-12-14 16:49:51.273480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.275 [2024-12-14 16:49:51.273507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.275 [2024-12-14 16:49:51.273521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.275 [2024-12-14 16:49:51.273534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbebcd0 00:36:21.275 [2024-12-14 16:49:51.273571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:21.275 qpair failed and we were unable to recover it. 
00:36:21.275 [2024-12-14 16:49:51.283502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.276 [2024-12-14 16:49:51.283611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.276 [2024-12-14 16:49:51.283666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.276 [2024-12-14 16:49:51.283691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.276 [2024-12-14 16:49:51.283712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedec000b90 00:36:21.276 [2024-12-14 16:49:51.283764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:21.276 qpair failed and we were unable to recover it. 
00:36:21.276 [2024-12-14 16:49:51.293501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.276 [2024-12-14 16:49:51.293590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.276 [2024-12-14 16:49:51.293618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.276 [2024-12-14 16:49:51.293632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.276 [2024-12-14 16:49:51.293645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedec000b90 00:36:21.276 [2024-12-14 16:49:51.293677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:21.276 qpair failed and we were unable to recover it. 00:36:21.276 [2024-12-14 16:49:51.293790] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:21.276 A controller has encountered a failure and is being reset. 
00:36:21.276 [2024-12-14 16:49:51.303541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:21.276 [2024-12-14 16:49:51.303638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:21.276 [2024-12-14 16:49:51.303683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:21.276 [2024-12-14 16:49:51.303706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:21.276 [2024-12-14 16:49:51.303735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fedf8000b90 00:36:21.276 [2024-12-14 16:49:51.303783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:21.276 qpair failed and we were unable to recover it. 00:36:21.534 Controller properly reset. 00:36:21.534 Initializing NVMe Controllers 00:36:21.534 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:21.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:21.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:21.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:21.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:21.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:21.534 Initialization complete. Launching workers. 
00:36:21.534 Starting thread on core 1 00:36:21.534 Starting thread on core 2 00:36:21.534 Starting thread on core 3 00:36:21.534 Starting thread on core 0 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:21.534 00:36:21.534 real 0m10.885s 00:36:21.534 user 0m19.495s 00:36:21.534 sys 0m4.759s 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:21.534 ************************************ 00:36:21.534 END TEST nvmf_target_disconnect_tc2 00:36:21.534 ************************************ 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:21.534 rmmod nvme_tcp 00:36:21.534 rmmod nvme_fabrics 00:36:21.534 rmmod nvme_keyring 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1209823 ']' 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1209823 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1209823 ']' 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1209823 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.534 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209823 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209823' 00:36:21.793 killing process with pid 1209823 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1209823 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1209823 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:21.793 16:49:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.328 16:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.328 00:36:24.328 real 0m19.636s 00:36:24.328 user 0m47.691s 00:36:24.328 sys 0m9.613s 00:36:24.328 16:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.328 16:49:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:24.328 ************************************ 00:36:24.328 END TEST nvmf_target_disconnect 00:36:24.328 ************************************ 00:36:24.328 16:49:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:24.328 00:36:24.328 real 7m22.345s 00:36:24.328 user 16m52.925s 00:36:24.328 sys 2m8.248s 00:36:24.328 16:49:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.328 16:49:53 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.328 ************************************ 00:36:24.328 END TEST nvmf_host 00:36:24.328 ************************************ 00:36:24.328 16:49:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:24.328 16:49:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:24.328 16:49:53 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:24.328 16:49:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:24.328 16:49:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.328 16:49:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:24.328 ************************************ 00:36:24.328 START TEST nvmf_target_core_interrupt_mode 00:36:24.328 ************************************ 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:24.328 * Looking for test storage... 
00:36:24.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.328 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:24.329 16:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:24.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.329 --rc 
genhtml_branch_coverage=1 00:36:24.329 --rc genhtml_function_coverage=1 00:36:24.329 --rc genhtml_legend=1 00:36:24.329 --rc geninfo_all_blocks=1 00:36:24.329 --rc geninfo_unexecuted_blocks=1 00:36:24.329 00:36:24.329 ' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:24.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.329 --rc genhtml_branch_coverage=1 00:36:24.329 --rc genhtml_function_coverage=1 00:36:24.329 --rc genhtml_legend=1 00:36:24.329 --rc geninfo_all_blocks=1 00:36:24.329 --rc geninfo_unexecuted_blocks=1 00:36:24.329 00:36:24.329 ' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:24.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.329 --rc genhtml_branch_coverage=1 00:36:24.329 --rc genhtml_function_coverage=1 00:36:24.329 --rc genhtml_legend=1 00:36:24.329 --rc geninfo_all_blocks=1 00:36:24.329 --rc geninfo_unexecuted_blocks=1 00:36:24.329 00:36:24.329 ' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:24.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.329 --rc genhtml_branch_coverage=1 00:36:24.329 --rc genhtml_function_coverage=1 00:36:24.329 --rc genhtml_legend=1 00:36:24.329 --rc geninfo_all_blocks=1 00:36:24.329 --rc geninfo_unexecuted_blocks=1 00:36:24.329 00:36:24.329 ' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.329 
16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.329 16:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:24.329 
16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:24.329 ************************************ 00:36:24.329 START TEST nvmf_abort 00:36:24.329 ************************************ 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:24.329 * Looking for test storage... 
00:36:24.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:24.329 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.589 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:24.590 16:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:24.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.590 --rc genhtml_branch_coverage=1 00:36:24.590 --rc genhtml_function_coverage=1 00:36:24.590 --rc genhtml_legend=1 00:36:24.590 --rc geninfo_all_blocks=1 00:36:24.590 --rc geninfo_unexecuted_blocks=1 00:36:24.590 00:36:24.590 ' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:24.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.590 --rc genhtml_branch_coverage=1 00:36:24.590 --rc genhtml_function_coverage=1 00:36:24.590 --rc genhtml_legend=1 00:36:24.590 --rc geninfo_all_blocks=1 00:36:24.590 --rc geninfo_unexecuted_blocks=1 00:36:24.590 00:36:24.590 ' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:24.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.590 --rc genhtml_branch_coverage=1 00:36:24.590 --rc genhtml_function_coverage=1 00:36:24.590 --rc genhtml_legend=1 00:36:24.590 --rc geninfo_all_blocks=1 00:36:24.590 --rc geninfo_unexecuted_blocks=1 00:36:24.590 00:36:24.590 ' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:24.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.590 --rc genhtml_branch_coverage=1 00:36:24.590 --rc genhtml_function_coverage=1 00:36:24.590 --rc genhtml_legend=1 00:36:24.590 --rc geninfo_all_blocks=1 00:36:24.590 --rc geninfo_unexecuted_blocks=1 00:36:24.590 00:36:24.590 ' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.590 16:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.590 16:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:24.590 16:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:31.160 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:31.161 16:50:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:31.161 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:31.161 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:31.161 
16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:31.161 Found net devices under 0000:af:00.0: cvl_0_0 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:31.161 Found net devices under 0000:af:00.1: cvl_0_1 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:31.161 16:50:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:31.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:31.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:36:31.161 00:36:31.161 --- 10.0.0.2 ping statistics --- 00:36:31.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.161 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:31.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:31.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:36:31.161 00:36:31.161 --- 10.0.0.1 ping statistics --- 00:36:31.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.161 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1214423 00:36:31.161 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1214423 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1214423 ']' 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 [2024-12-14 16:50:00.423368] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:31.162 [2024-12-14 16:50:00.424301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:31.162 [2024-12-14 16:50:00.424334] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.162 [2024-12-14 16:50:00.502946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:31.162 [2024-12-14 16:50:00.524729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:31.162 [2024-12-14 16:50:00.524764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:31.162 [2024-12-14 16:50:00.524772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:31.162 [2024-12-14 16:50:00.524777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:31.162 [2024-12-14 16:50:00.524783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:31.162 [2024-12-14 16:50:00.526061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:31.162 [2024-12-14 16:50:00.526169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.162 [2024-12-14 16:50:00.526170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:31.162 [2024-12-14 16:50:00.588048] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:31.162 [2024-12-14 16:50:00.588908] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:31.162 [2024-12-14 16:50:00.589127] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:31.162 [2024-12-14 16:50:00.589280] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 [2024-12-14 16:50:00.655002] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:31.162 Malloc0 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 Delay0 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 [2024-12-14 16:50:00.742896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.162 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:31.162 [2024-12-14 16:50:00.908636] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:33.065 Initializing NVMe Controllers 00:36:33.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:33.065 controller IO queue size 128 less than required 00:36:33.065 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:33.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:33.065 Initialization complete. Launching workers. 
00:36:33.065 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37849 00:36:33.066 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37906, failed to submit 66 00:36:33.066 success 37849, unsuccessful 57, failed 0 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:33.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:33.066 rmmod nvme_tcp 00:36:33.066 rmmod nvme_fabrics 00:36:33.066 rmmod nvme_keyring 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:33.324 16:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1214423 ']' 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1214423 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1214423 ']' 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1214423 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1214423 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1214423' 00:36:33.324 killing process with pid 1214423 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1214423 00:36:33.324 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1214423 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:33.325 16:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:35.858 00:36:35.858 real 0m11.197s 00:36:35.858 user 0m10.816s 00:36:35.858 sys 0m5.757s 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.858 ************************************ 00:36:35.858 END TEST nvmf_abort 00:36:35.858 ************************************ 00:36:35.858 16:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:35.858 ************************************ 00:36:35.858 START TEST nvmf_ns_hotplug_stress 00:36:35.858 ************************************ 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:35.858 * Looking for test storage... 
00:36:35.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:35.858 16:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:35.858 16:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.858 --rc genhtml_branch_coverage=1 00:36:35.858 --rc genhtml_function_coverage=1 00:36:35.858 --rc genhtml_legend=1 00:36:35.858 --rc geninfo_all_blocks=1 00:36:35.858 --rc geninfo_unexecuted_blocks=1 00:36:35.858 00:36:35.858 ' 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.858 --rc genhtml_branch_coverage=1 00:36:35.858 --rc genhtml_function_coverage=1 00:36:35.858 --rc genhtml_legend=1 00:36:35.858 --rc geninfo_all_blocks=1 00:36:35.858 --rc geninfo_unexecuted_blocks=1 00:36:35.858 00:36:35.858 ' 00:36:35.858 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.858 --rc genhtml_branch_coverage=1 00:36:35.858 --rc genhtml_function_coverage=1 00:36:35.858 --rc genhtml_legend=1 00:36:35.858 --rc geninfo_all_blocks=1 00:36:35.858 --rc geninfo_unexecuted_blocks=1 00:36:35.858 00:36:35.858 ' 00:36:35.858 16:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:35.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.859 --rc genhtml_branch_coverage=1 00:36:35.859 --rc genhtml_function_coverage=1 00:36:35.859 --rc genhtml_legend=1 00:36:35.859 --rc geninfo_all_blocks=1 00:36:35.859 --rc geninfo_unexecuted_blocks=1 00:36:35.859 00:36:35.859 ' 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.859 16:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.859 
16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:35.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:42.426 
16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:42.426 16:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:42.426 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:42.426 16:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:42.426 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.426 
16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:42.426 Found net devices under 0000:af:00.0: cvl_0_0 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:42.426 Found net devices under 0000:af:00.1: cvl_0_1 00:36:42.426 
16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:42.426 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:42.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:42.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:36:42.427 00:36:42.427 --- 10.0.0.2 ping statistics --- 00:36:42.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.427 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:42.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:42.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:36:42.427 00:36:42.427 --- 10.0.0.1 ping statistics --- 00:36:42.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.427 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:42.427 16:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1218337 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1218337 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1218337 ']' 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:42.427 [2024-12-14 16:50:11.672330] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:42.427 [2024-12-14 16:50:11.673303] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:42.427 [2024-12-14 16:50:11.673340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.427 [2024-12-14 16:50:11.752615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:42.427 [2024-12-14 16:50:11.774354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:42.427 [2024-12-14 16:50:11.774389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:42.427 [2024-12-14 16:50:11.774396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:42.427 [2024-12-14 16:50:11.774401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:42.427 [2024-12-14 16:50:11.774406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:42.427 [2024-12-14 16:50:11.775636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:42.427 [2024-12-14 16:50:11.775746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.427 [2024-12-14 16:50:11.775747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:42.427 [2024-12-14 16:50:11.837143] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:42.427 [2024-12-14 16:50:11.837889] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:42.427 [2024-12-14 16:50:11.838591] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:42.427 [2024-12-14 16:50:11.838669] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:36:42.427 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:42.427 [2024-12-14 16:50:12.068399] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:42.427 16:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:42.427 16:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:42.427 [2024-12-14 16:50:12.472767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:42.427 16:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:42.689 16:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:42.948 Malloc0 00:36:42.948 16:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:43.206 Delay0 00:36:43.206 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.206 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:43.464 NULL1 00:36:43.464 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:43.722 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1218591 00:36:43.722 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:43.722 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:43.722 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.097 Read completed with error (sct=0, sc=11) 00:36:45.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.097 16:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:36:45.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.097 16:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:45.097 16:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:45.354 true 00:36:45.354 16:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:45.354 16:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.287 16:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.287 16:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:46.287 16:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:46.545 true 00:36:46.545 16:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:46.545 16:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:46.803 16:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.060 16:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:47.060 16:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:47.060 true 00:36:47.060 16:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:47.060 16:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.434 16:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.434 16:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:48.434 16:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:48.691 true 00:36:48.691 16:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:48.691 16:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.949 16:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.949 16:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:48.949 16:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:49.207 true 00:36:49.207 16:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:49.207 16:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.141 16:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.398 16:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:50.398 16:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:50.656 true 00:36:50.656 16:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:50.656 16:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.915 16:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.915 16:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:50.915 16:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:51.172 true 00:36:51.172 16:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:51.173 16:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.610 16:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.610 16:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:52.610 16:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:52.610 true 00:36:52.610 16:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 1218591 00:36:52.610 16:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.868 16:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.126 16:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:53.126 16:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:53.126 true 00:36:53.384 16:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:53.384 16:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.317 16:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.575 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:36:54.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.575 16:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:54.575 16:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:54.833 true 00:36:54.833 16:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:54.833 16:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.766 16:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.766 16:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:55.766 16:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:56.024 true 00:36:56.024 16:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:56.024 16:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.282 16:50:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.282 16:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:56.282 16:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:56.540 true 00:36:56.540 16:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:56.540 16:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.915 16:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.916 16:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:57.916 16:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:58.173 true 00:36:58.173 16:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:58.173 16:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.107 16:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.107 16:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:59.107 16:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:59.365 true 00:36:59.365 16:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:59.365 16:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.622 16:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.880 16:50:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:59.880 16:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:59.880 true 00:36:59.880 16:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:36:59.880 16:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.253 16:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.253 16:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:01.253 16:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:01.511 true 00:37:01.511 16:50:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:01.511 16:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.445 16:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.445 16:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:02.445 16:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:02.702 true 00:37:02.702 16:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:02.702 16:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.960 16:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.218 16:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:03.218 16:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:03.218 true 
00:37:03.218 16:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:03.218 16:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.590 16:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.590 16:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:04.590 16:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:04.850 true 00:37:04.850 16:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:04.850 16:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.512 
16:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.769 16:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:05.769 16:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:06.027 true 00:37:06.027 16:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:06.027 16:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.284 16:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.284 16:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:06.284 16:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:06.542 true 00:37:06.542 16:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:06.542 16:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.915 16:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:07.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.915 16:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:07.915 16:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:08.172 true 00:37:08.172 16:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:08.172 16:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.104 16:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.104 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:37:09.104 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:09.104 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:09.362 true 00:37:09.362 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:09.362 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.620 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.620 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:09.620 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:09.878 true 00:37:09.878 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:09.878 16:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.250 16:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.250 16:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:11.250 16:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:11.508 true 00:37:11.508 16:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:11.508 16:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.442 16:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.442 16:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:12.442 16:50:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:12.700 true 00:37:12.700 16:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:12.700 16:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.957 16:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:13.215 16:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:13.215 16:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:13.215 true 00:37:13.473 16:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:13.473 16:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:14.406 Initializing NVMe Controllers
00:37:14.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:14.406 Controller IO queue size 128, less than required.
00:37:14.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:14.406 Controller IO queue size 128, less than required.
00:37:14.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:14.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:14.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:14.406 Initialization complete. Launching workers.
00:37:14.406 ========================================================
00:37:14.406                                                                                                          Latency(us)
00:37:14.406 Device Information                                                                             :    IOPS      MiB/s    Average        min        max
00:37:14.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1943.60       0.95   44986.99    2185.47 1018825.25
00:37:14.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17950.20       8.76    7130.80    1655.11  368780.81
00:37:14.406 ========================================================
00:37:14.406 Total                                                                                          : 19893.80       9.71   10829.31    1655.11 1018825.25
00:37:14.406
00:37:14.406 16:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.664 16:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:14.664 16:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:14.921 true 00:37:14.921 16:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218591 00:37:14.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1218591) - No such process 00:37:14.921 16:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1218591 00:37:14.921 16:50:44
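The phase that just finished above is one loop of ns_hotplug_stress.sh (the `@44`-`@50` markers in the log): while the I/O process is alive, detach namespace 1, re-attach the `Delay0` bdev, and grow `NULL1` by one block per pass; the loop ends when `kill -0` reports the process gone. A minimal standalone sketch of that control flow, with the `rpc.py` calls stubbed out (the `rpc` function and `perf_pid` names are illustrative, not from the SPDK script):

```shell
#!/usr/bin/env bash
# Sketch of the hotplug loop traced in the log above. The real script invokes
# spdk/scripts/rpc.py against a live target; here rpc is a no-op stub so the
# loop structure can run on its own.
rpc() { :; }                      # stand-in for spdk/scripts/rpc.py

sleep 30 & perf_pid=$!            # stand-in for the I/O process being monitored
null_size=1000

# Loop while the I/O process is alive (kill -0 == "is the pid still there?");
# capped at a few passes here so the sketch terminates quickly.
while kill -0 "$perf_pid" 2>/dev/null && [ "$null_size" -lt 1005 ]; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # detach NSID 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach bdev
    null_size=$((null_size + 1))                                 # grow NULL1 each pass
    rpc bdev_null_resize NULL1 "$null_size"
done
kill "$perf_pid" 2>/dev/null
echo "$null_size"
```

In the real run the loop exits the same way the log shows: `kill -0` fails with "No such process" once the I/O workload finishes.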
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.921 16:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:15.185 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:15.185 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:15.185 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:15.185 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.185 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:15.446 null0 00:37:15.446 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.446 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.446 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:15.704 null1 00:37:15.704 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.704 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.704 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:15.704 null2 00:37:15.704 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.704 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.704 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:15.963 null3 00:37:15.963 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.963 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.963 16:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:16.222 null4 00:37:16.222 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:16.222 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:16.222 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:16.222 null5 00:37:16.222 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:37:16.222 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:16.222 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:16.481 null6 00:37:16.481 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:16.481 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:16.482 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:16.741 null7 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
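The `null0` through `null7` creations above come from the setup loop at lines 58-60 of the script: with `nthreads=8`, it creates one 100 MiB null bdev with a 4096-byte block size per worker. A sketch of that loop, with `bdev_null_create` stubbed (the `rpc` function here is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the per-thread bdev setup loop seen in the log (null0..null7).
rpc() { echo "created $2"; }      # stand-in for spdk/scripts/rpc.py bdev_null_create

nthreads=8
created=()
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # args: name, size_mb, block_size
    created+=("null$i")
done
echo "${#created[@]} bdevs"
```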
00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1223866 1223868 1223871 1223875 1223877 1223880 1223883 1223886 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.741 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:17.000 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
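The interleaved `@62`-`@64` and `@14`-`@18` markers above, ending in `wait` on eight pids, are eight background `add_remove` workers being forked: each repeatedly attaches and detaches its own namespace (NSID 1 maps to null0, NSID 2 to null1, and so on) while the parent waits on all of them. A standalone sketch of that fan-out, with the RPC calls stubbed (the `rpc` stub and the iteration count of 10 mirror the `i < 10` loop visible in the log but are otherwise illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the parallel add_remove workers traced in the log above.
rpc() { :; }                      # stand-in for spdk/scripts/rpc.py

add_remove() {
    local nsid=$1 bdev=$2
    # Each worker hammers add_ns/remove_ns on its own namespace ID.
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &   # one background worker per namespace
    pids+=($!)
done
wait "${pids[@]}"                        # matches the "wait <pid>..." line in the log
echo "${#pids[@]} workers finished"
```

Because the eight workers race against each other, the remove/add ordering in the log is nondeterministic, which is the point of the stress test.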
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:17.000 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.000 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.000 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:17.000 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:17.000 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:17.000 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.000 16:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.259 16:50:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:17.259 16:50:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.259 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.518 16:50:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.518 16:50:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.518 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:17.776 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:17.776 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.776 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:17.776 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.776 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:17.776 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:17.776 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:17.776 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:18.034 16:50:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.034 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.035 16:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:18.035 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.293 16:50:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.293 16:50:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.293 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:18.551 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.810 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:19.068 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:19.068 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:19.068 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:19.068 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:37:19.068 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:19.068 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.068 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:19.068 16:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.326 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:19.327 16:50:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:19.327 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.586 16:50:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.586 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:19.844 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:19.844 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:19.844 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:19.844 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:19.844 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:19.845 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:19.845 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.845 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:20.103 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.103 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.103 16:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 
null0 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:20.103 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:20.362 16:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.362 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:20.621 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:20.621 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:20.621 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:20.621 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:20.621 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:20.621 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:20.621 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:20.621 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:20.880 16:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:20.880 rmmod nvme_tcp 00:37:20.880 rmmod nvme_fabrics 00:37:20.880 rmmod nvme_keyring 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1218337 ']' 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1218337 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1218337 ']' 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 1218337 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.880 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1218337 00:37:21.144 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:21.144 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:21.144 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1218337' 00:37:21.144 killing process with pid 1218337 00:37:21.144 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1218337 00:37:21.144 16:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1218337 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.144 16:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:23.680 00:37:23.680 real 0m47.672s 00:37:23.680 user 2m58.790s 00:37:23.680 sys 0m19.572s 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:23.680 ************************************ 00:37:23.680 END TEST nvmf_ns_hotplug_stress 00:37:23.680 ************************************ 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:23.680 16:50:53 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:23.680 ************************************ 00:37:23.680 START TEST nvmf_delete_subsystem 00:37:23.680 ************************************ 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:23.680 * Looking for test storage... 00:37:23.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:23.680 16:50:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.680 --rc genhtml_branch_coverage=1 00:37:23.680 --rc genhtml_function_coverage=1 00:37:23.680 --rc genhtml_legend=1 00:37:23.680 --rc geninfo_all_blocks=1 00:37:23.680 --rc geninfo_unexecuted_blocks=1 00:37:23.680 00:37:23.680 ' 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.680 --rc genhtml_branch_coverage=1 00:37:23.680 --rc genhtml_function_coverage=1 00:37:23.680 --rc genhtml_legend=1 00:37:23.680 --rc geninfo_all_blocks=1 00:37:23.680 --rc geninfo_unexecuted_blocks=1 00:37:23.680 00:37:23.680 ' 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.680 --rc genhtml_branch_coverage=1 00:37:23.680 --rc genhtml_function_coverage=1 00:37:23.680 --rc genhtml_legend=1 00:37:23.680 --rc geninfo_all_blocks=1 00:37:23.680 --rc geninfo_unexecuted_blocks=1 00:37:23.680 00:37:23.680 ' 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:23.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.680 --rc genhtml_branch_coverage=1 00:37:23.680 --rc genhtml_function_coverage=1 00:37:23.680 --rc genhtml_legend=1 00:37:23.680 --rc geninfo_all_blocks=1 00:37:23.680 --rc geninfo_unexecuted_blocks=1 00:37:23.680 00:37:23.680 ' 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.680 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.681 16:50:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:23.681 16:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:28.955 16:50:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:28.955 16:50:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:28.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:28.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.955 16:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.955 16:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:28.955 Found net devices under 0000:af:00.0: cvl_0_0 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:28.955 Found net devices under 0000:af:00.1: cvl_0_1 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:28.955 16:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:28.955 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:28.956 16:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:28.956 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:29.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:37:29.216 00:37:29.216 --- 10.0.0.2 ping statistics --- 00:37:29.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.216 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:29.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:37:29.216 00:37:29.216 --- 10.0.0.1 ping statistics --- 00:37:29.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.216 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:29.216 
16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1228081 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1228081 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1228081 ']' 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:29.216 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.475 [2024-12-14 16:50:59.338424] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:29.475 [2024-12-14 16:50:59.339427] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:29.475 [2024-12-14 16:50:59.339465] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:29.475 [2024-12-14 16:50:59.404044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:29.475 [2024-12-14 16:50:59.427629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:29.475 [2024-12-14 16:50:59.427664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:29.475 [2024-12-14 16:50:59.427674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:29.475 [2024-12-14 16:50:59.427679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:29.475 [2024-12-14 16:50:59.427685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:29.475 [2024-12-14 16:50:59.428789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.475 [2024-12-14 16:50:59.428796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.475 [2024-12-14 16:50:59.492535] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:29.475 [2024-12-14 16:50:59.492600] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:29.475 [2024-12-14 16:50:59.492743] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:29.475 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.475 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:29.475 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:29.475 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:29.475 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.734 [2024-12-14 16:50:59.569446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.734 [2024-12-14 16:50:59.597827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.734 NULL1 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.734 Delay0 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1228189 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:29.734 16:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:29.734 [2024-12-14 16:50:59.708743] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
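The rpc_cmd invocations traced above build the whole target-side fixture for this test. Collected in one place as a sketch (all parameters are copied verbatim from the xtrace; the `rpc.py` wrapper shown in the trailing comment stands in for SPDK's `scripts/rpc.py` and is illustrative, not part of this log):

```shell
# Target setup replayed from the rpc_cmd xtrace above
# (delete_subsystem.sh steps: transport, subsystem, listener, bdevs, namespace).
setup_cmds=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
  "bdev_null_create NULL1 1000 512"
  "bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0"
)
# In the real run each entry goes through rpc_cmd, roughly:
#   for c in "${setup_cmds[@]}"; do ./scripts/rpc.py $c; done
```

The 1,000,000 us delay on every I/O path of Delay0 is what keeps requests in flight long enough for the subsequent nvmf_delete_subsystem to race against them.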
00:37:31.631 16:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:31.631 16:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.631 16:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:31.889 Write completed with error (sct=0, sc=8) 00:37:31.889 starting I/O failed: -6 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Write completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 starting I/O failed: -6 00:37:31.889 Write completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 starting I/O failed: -6 00:37:31.889 Write completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 starting I/O failed: -6 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Write completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 starting I/O failed: -6 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Write completed with error (sct=0, sc=8) 00:37:31.889 Read completed with error (sct=0, sc=8) 00:37:31.889 Write completed with error (sct=0, sc=8) 00:37:31.889 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, 
sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 [2024-12-14 16:51:01.918486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1747f70 is same with the state(6) to be set 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error 
(sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 
00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting 
I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 starting I/O failed: -6 00:37:31.890 [2024-12-14 16:51:01.919425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe860000c80 is same with the state(6) to be set 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 
Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Read completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error (sct=0, sc=8) 00:37:31.890 Write completed with error 
(sct=0, sc=8) 00:37:32.824 [2024-12-14 16:51:02.887637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1746190 is same with the state(6) to be set 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 [2024-12-14 16:51:02.921899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748400 is same with the state(6) to be set 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, 
sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 [2024-12-14 16:51:02.922433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe86000d800 is same with the state(6) to be set 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 
00:37:33.083 [2024-12-14 16:51:02.922602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe86000d060 is same with the state(6) to be set 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Read completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 Write completed with error (sct=0, sc=8) 00:37:33.083 [2024-12-14 16:51:02.923648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17485e0 is same with the state(6) to be set 00:37:33.083 Initializing NVMe Controllers 00:37:33.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:33.083 Controller IO queue size 128, less than required. 00:37:33.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
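The bursts of completion lines above all share one fixed format, `<Read|Write> completed with error (sct=<n>, sc=<n>)`, so a saved copy of this console output can be summarized with grep. A minimal self-contained sketch (the sample file name and its three lines are illustrative, following the format printed in this log):

```shell
# Tally read vs. write completion errors from a saved log excerpt.
# /tmp/io_errors.sample is a hypothetical file, seeded here with sample lines.
cat > /tmp/io_errors.sample <<'EOF'
Read completed with error (sct=0, sc=8)
Write completed with error (sct=0, sc=8)
Read completed with error (sct=0, sc=8)
EOF
reads=$(grep -c '^Read completed with error' /tmp/io_errors.sample)
writes=$(grep -c '^Write completed with error' /tmp/io_errors.sample)
echo "reads=$reads writes=$writes"    # prints: reads=2 writes=1
```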
00:37:33.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:33.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:33.083 Initialization complete. Launching workers.
00:37:33.083 ========================================================
00:37:33.083 Latency(us)
00:37:33.083 Device Information : IOPS MiB/s Average min max
00:37:33.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.42 0.08 900809.23 320.55 1011521.13
00:37:33.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.95 0.08 908708.25 256.69 1043060.28
00:37:33.083 ========================================================
00:37:33.083 Total : 331.37 0.16 904717.29 256.69 1043060.28
00:37:33.083
00:37:33.083 [2024-12-14 16:51:02.924067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1746190 (9): Bad file descriptor
00:37:33.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:33.083 16:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:33.083 16:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:33.083 16:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1228189
00:37:33.083 16:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1228189
00:37:33.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh:
line 35: kill: (1228189) - No such process 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1228189 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1228189 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1228189 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:33.652 16:51:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:33.652 [2024-12-14 16:51:03.453739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1228895 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228895 00:37:33.652 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:33.652 [2024-12-14 16:51:03.536524] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:33.910 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:33.910 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228895 00:37:33.910 16:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:34.474 16:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:34.474 16:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228895 00:37:34.474 16:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:35.038 16:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:35.038 16:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228895 
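The `(( delay++ > 20 ))` / `kill -0` / `sleep 0.5` triple repeating in the trace is a bounded poll waiting for the perf process to exit. Reconstructed as a standalone sketch (the background `sleep 1` stands in for the spdk_nvme_perf child; the roughly 10 s bound matches the log's 20 iterations of 0.5 s):

```shell
# Bounded poll: wait for a child process to exit, giving up after ~10 seconds.
sleep 1 &                 # stand-in for the spdk_nvme_perf child process
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then   # same bound as the (( delay++ > 20 )) in the trace
        echo "process $perf_pid still alive after ~10s, giving up"
        break
    fi
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null || true
```

Once the child is gone, `kill -0` fails and the loop exits, which is why the log later shows `kill: (1228895) - No such process` when the script probes the PID one last time.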
00:37:35.038 16:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:35.600 16:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:35.601 16:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228895 00:37:35.601 16:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:36.165 16:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:36.165 16:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228895 00:37:36.165 16:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:36.422 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:36.422 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228895 00:37:36.422 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:36.987 Initializing NVMe Controllers 00:37:36.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:36.987 Controller IO queue size 128, less than required. 00:37:36.987 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:36.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:36.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:36.987 Initialization complete. Launching workers. 
00:37:36.987 ========================================================
00:37:36.987 Latency(us)
00:37:36.987 Device Information : IOPS MiB/s Average min max
00:37:36.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002284.48 1000142.57 1006467.34
00:37:36.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005503.91 1000309.40 1043231.16
00:37:36.987 ========================================================
00:37:36.987 Total : 256.00 0.12 1003894.20 1000142.57 1043231.16
00:37:36.987
00:37:36.987 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:36.987 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228895
00:37:36.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1228895) - No such process
00:37:36.987 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1228895
00:37:36.987 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:37:36.987 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:37:36.987 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:36.987 16:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:37:36.987 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:36.987 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:37:36.987 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:37:36.987 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:36.987 rmmod nvme_tcp 00:37:36.987 rmmod nvme_fabrics 00:37:36.987 rmmod nvme_keyring 00:37:36.987 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1228081 ']' 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1228081 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1228081 ']' 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1228081 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1228081 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:37.246 16:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1228081' 00:37:37.246 killing process with pid 1228081 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1228081 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1228081 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.246 16:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.246 16:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:39.781 00:37:39.781 real 0m16.090s 00:37:39.781 user 0m26.556s 00:37:39.781 sys 0m5.945s 00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:39.781 ************************************ 00:37:39.781 END TEST nvmf_delete_subsystem 00:37:39.781 ************************************ 00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:39.781 ************************************ 00:37:39.781 START TEST nvmf_host_management 00:37:39.781 ************************************ 00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:39.781 * Looking for test storage... 
00:37:39.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:37:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:39.781 --rc genhtml_branch_coverage=1
00:37:39.781 --rc genhtml_function_coverage=1
00:37:39.781 --rc genhtml_legend=1
00:37:39.781 --rc geninfo_all_blocks=1
00:37:39.781 --rc geninfo_unexecuted_blocks=1
00:37:39.781
00:37:39.781 '
00:37:39.781 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:37:39.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:39.781 --rc genhtml_branch_coverage=1
00:37:39.781 --rc genhtml_function_coverage=1
00:37:39.781 --rc genhtml_legend=1
00:37:39.781 --rc geninfo_all_blocks=1
00:37:39.781 --rc geninfo_unexecuted_blocks=1
00:37:39.781
00:37:39.781 '
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:37:39.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:39.782 --rc genhtml_branch_coverage=1
00:37:39.782 --rc genhtml_function_coverage=1
00:37:39.782 --rc genhtml_legend=1
00:37:39.782 --rc geninfo_all_blocks=1
00:37:39.782 --rc geninfo_unexecuted_blocks=1
00:37:39.782
00:37:39.782 '
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:37:39.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:39.782 --rc genhtml_branch_coverage=1
00:37:39.782 --rc genhtml_function_coverage=1
00:37:39.782 --rc genhtml_legend=1
00:37:39.782 --rc geninfo_all_blocks=1
00:37:39.782 --rc geninfo_unexecuted_blocks=1
00:37:39.782
00:37:39.782 '
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:37:39.782 16:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:46.353 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:37:46.354 Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:37:46.354 Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:37:46.354 Found net devices under 0000:af:00.0: cvl_0_0
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:37:46.354 Found net devices under 0000:af:00.1: cvl_0_1
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:46.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:46.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:37:46.354 00:37:46.354 --- 10.0.0.2 ping statistics --- 00:37:46.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.354 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:46.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:46.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:37:46.354 00:37:46.354 --- 10.0.0.1 ping statistics --- 00:37:46.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.354 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
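The `nvmf_tcp_init` phase above creates a network namespace for the target side, moves one interface into it, assigns the initiator/target addresses, opens TCP port 4420, and verifies connectivity with a ping in each direction. The sequence can be sketched as a dry-run script; the interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and `10.0.0.x` addresses are taken from the log, while the `run` echo-wrapper and function name are illustrative so the sketch is safe to execute without root or real NICs.

```shell
# Dry-run sketch of the nvmf_tcp_init sequence seen in the log.
# run() only echoes each command, so nothing touches the real network stack.
nvmf_tcp_init_dry_run() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run() { echo "+ $*"; }

    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}

nvmf_tcp_init_dry_run
```

Both pings in the log complete with 0% packet loss, confirming the veth-style split between the host (initiator) and the namespace (target) before the target application is launched.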
00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1233316 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1233316 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1233316 ']' 00:37:46.354 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.355 [2024-12-14 16:51:15.592344] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:46.355 [2024-12-14 16:51:15.593334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:46.355 [2024-12-14 16:51:15.593370] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.355 [2024-12-14 16:51:15.671974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:46.355 [2024-12-14 16:51:15.695183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.355 [2024-12-14 16:51:15.695222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.355 [2024-12-14 16:51:15.695229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.355 [2024-12-14 16:51:15.695235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.355 [2024-12-14 16:51:15.695240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:46.355 [2024-12-14 16:51:15.696672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:46.355 [2024-12-14 16:51:15.696783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:46.355 [2024-12-14 16:51:15.696890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.355 [2024-12-14 16:51:15.696892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:37:46.355 [2024-12-14 16:51:15.759436] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:46.355 [2024-12-14 16:51:15.760528] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:46.355 [2024-12-14 16:51:15.760675] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:46.355 [2024-12-14 16:51:15.761027] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:46.355 [2024-12-14 16:51:15.761072] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
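The target is launched with `-m 0x1E`, and the log then reports reactors starting on cores 1 through 4; the correspondence is simply the set bits of the core mask. A small sketch of that decoding (the helper name `mask_to_cores` is illustrative, not part of SPDK):

```shell
# Decode an SPDK core mask into the cores it selects.
# 0x1E is binary 11110, i.e. bits 1..4 set -- matching the four
# "Reactor started on core N" lines in the log above.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            cores="$cores $core"
        fi
        core=$(( core + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo "${cores# }"
}

mask_to_cores 0x1E   # prints: 1 2 3 4
```

Core 0 is deliberately left out of the target's mask so the bdevperf initiator started later (with `-c 0x1`) gets it to itself.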
00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.355 [2024-12-14 16:51:15.833661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.355 16:51:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.355 Malloc0 00:37:46.355 [2024-12-14 16:51:15.917894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1233456 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1233456 /var/tmp/bdevperf.sock 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1233456 ']' 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:46.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:46.355 { 00:37:46.355 "params": { 00:37:46.355 "name": "Nvme$subsystem", 00:37:46.355 "trtype": "$TEST_TRANSPORT", 00:37:46.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.355 "adrfam": "ipv4", 00:37:46.355 "trsvcid": "$NVMF_PORT", 00:37:46.355 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:46.355 "hdgst": ${hdgst:-false}, 00:37:46.355 "ddgst": ${ddgst:-false} 00:37:46.355 }, 00:37:46.355 "method": "bdev_nvme_attach_controller" 00:37:46.355 } 00:37:46.355 EOF 00:37:46.355 )") 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:46.355 16:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:46.355 "params": { 00:37:46.355 "name": "Nvme0", 00:37:46.355 "trtype": "tcp", 00:37:46.355 "traddr": "10.0.0.2", 00:37:46.355 "adrfam": "ipv4", 00:37:46.355 "trsvcid": "4420", 00:37:46.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:46.355 "hdgst": false, 00:37:46.355 "ddgst": false 00:37:46.355 }, 00:37:46.355 "method": "bdev_nvme_attach_controller" 00:37:46.355 }' 00:37:46.355 [2024-12-14 16:51:16.012638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:46.355 [2024-12-14 16:51:16.012685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233456 ] 00:37:46.355 [2024-12-14 16:51:16.087672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:46.355 [2024-12-14 16:51:16.109780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.355 Running I/O for 10 seconds... 
00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:46.355 16:51:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:46.355 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=101 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 101 -ge 100 ']' 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.356 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.356 
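The `waitforio` trace above polls `bdev_get_iostat` over the bdevperf RPC socket up to 10 times, extracting `num_read_ops` with jq and breaking once at least 100 reads have completed (the log reaches 101 on the first check). A sketch of that loop, assuming jq is available; `rpc_cmd` is stubbed here with canned JSON so the flow can run outside the test rig:

```shell
# Sketch of the waitforio polling loop from host_management.sh.
# rpc_cmd stands in for the real RPC call (bdev_get_iostat -b Nvme0n1
# against /var/tmp/bdevperf.sock) and returns canned iostat JSON.
rpc_cmd() {
    echo '{"bdevs": [{"name": "Nvme0n1", "num_read_ops": 101}]}'
}

waitforio() {
    local i ret=1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

Once the threshold is met the test knows I/O is flowing, and it proceeds to remove the host from the subsystem (`nvmf_subsystem_remove_host`), which triggers the qpair teardown errors that follow.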
[2024-12-14 16:51:16.405350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405468] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405538] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.405595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c93240 is same with the state(6) to be set 00:37:46.356 [2024-12-14 16:51:16.408900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.356 [2024-12-14 16:51:16.408931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.356 [2024-12-14 16:51:16.408945] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.356 [2024-12-14 16:51:16.408953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.356 [2024-12-14 16:51:16.408965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.356 [2024-12-14 16:51:16.408972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.356 [2024-12-14 16:51:16.408980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.356 [2024-12-14 16:51:16.408986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.356 [2024-12-14 16:51:16.408994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.356 [2024-12-14 16:51:16.409001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.356 [2024-12-14 16:51:16.409009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.356 [2024-12-14 16:51:16.409015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.356 [2024-12-14 16:51:16.409023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.356 [2024-12-14 16:51:16.409029] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.356 [2024-12-14 16:51:16.409037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.356 [2024-12-14 16:51:16.409044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 55 identical WRITE print/ABORTED - SQ DELETION completion pairs omitted: cid:8 through cid:62, lba:25600 through lba:32512, len:128 ...]
00:37:46.358 [2024-12-14 16:51:16.409847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.358 [2024-12-14 16:51:16.409854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.358 [2024-12-14 16:51:16.409879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.358 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.358 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:46.358 [2024-12-14 16:51:16.410787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:46.358 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.358 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:46.358 task offset: 24576 on job bdev=Nvme0n1 fails 00:37:46.358 00:37:46.358 Latency(us) 00:37:46.358 [2024-12-14T15:51:16.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.358 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.358 Job: Nvme0n1 ended in about 0.11 seconds with error 00:37:46.358 Verification LBA range: start 0x0 length 0x400 00:37:46.358 Nvme0n1 : 0.11 1765.60 110.35 588.53 0.00 25060.94 1529.17 26963.38 00:37:46.358 [2024-12-14T15:51:16.444Z] =================================================================================================================== 00:37:46.358 [2024-12-14T15:51:16.444Z] Total :
1765.60 110.35 588.53 0.00 25060.94 1529.17 26963.38 00:37:46.358 [2024-12-14 16:51:16.413228] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:46.358 [2024-12-14 16:51:16.413246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61490 (9): Bad file descriptor 00:37:46.358 [2024-12-14 16:51:16.414154] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:46.358 [2024-12-14 16:51:16.414220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:46.358 [2024-12-14 16:51:16.414242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:46.358 [2024-12-14 16:51:16.414255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:46.358 [2024-12-14 16:51:16.414262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:46.358 [2024-12-14 16:51:16.414269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.358 [2024-12-14 16:51:16.414275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d61490 00:37:46.358 [2024-12-14 16:51:16.414293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61490 (9): Bad file descriptor 00:37:46.358 [2024-12-14 16:51:16.414304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:46.358 [2024-12-14 16:51:16.414310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:46.358 [2024-12-14 
16:51:16.414318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:46.358 [2024-12-14 16:51:16.414327] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:46.358 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.358 16:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1233456 00:37:47.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1233456) - No such process 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:47.727 16:51:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:47.727 { 00:37:47.727 "params": { 00:37:47.727 "name": "Nvme$subsystem", 00:37:47.727 "trtype": "$TEST_TRANSPORT", 00:37:47.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:47.727 "adrfam": "ipv4", 00:37:47.727 "trsvcid": "$NVMF_PORT", 00:37:47.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:47.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:47.727 "hdgst": ${hdgst:-false}, 00:37:47.727 "ddgst": ${ddgst:-false} 00:37:47.727 }, 00:37:47.727 "method": "bdev_nvme_attach_controller" 00:37:47.727 } 00:37:47.727 EOF 00:37:47.727 )") 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:47.727 16:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:47.727 "params": { 00:37:47.727 "name": "Nvme0", 00:37:47.727 "trtype": "tcp", 00:37:47.727 "traddr": "10.0.0.2", 00:37:47.727 "adrfam": "ipv4", 00:37:47.727 "trsvcid": "4420", 00:37:47.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:47.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:47.728 "hdgst": false, 00:37:47.728 "ddgst": false 00:37:47.728 }, 00:37:47.728 "method": "bdev_nvme_attach_controller" 00:37:47.728 }' 00:37:47.728 [2024-12-14 16:51:17.475287] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:37:47.728 [2024-12-14 16:51:17.475334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233700 ] 00:37:47.728 [2024-12-14 16:51:17.550940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.728 [2024-12-14 16:51:17.571485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.985 Running I/O for 1 seconds... 00:37:48.916 2006.00 IOPS, 125.38 MiB/s 00:37:48.916 Latency(us) 00:37:48.916 [2024-12-14T15:51:19.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.916 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:48.916 Verification LBA range: start 0x0 length 0x400 00:37:48.916 Nvme0n1 : 1.05 1965.71 122.86 0.00 0.00 30773.98 3105.16 44439.65 00:37:48.916 [2024-12-14T15:51:19.002Z] =================================================================================================================== 00:37:48.916 [2024-12-14T15:51:19.002Z] Total : 1965.71 122.86 0.00 0.00 30773.98 3105.16 44439.65 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:49.175 rmmod nvme_tcp 00:37:49.175 rmmod nvme_fabrics 00:37:49.175 rmmod nvme_keyring 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1233316 ']' 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1233316 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1233316 ']' 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1233316 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:49.175 16:51:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1233316 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1233316' 00:37:49.175 killing process with pid 1233316 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1233316 00:37:49.175 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1233316 00:37:49.434 [2024-12-14 16:51:19.328172] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:49.434 16:51:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.434 16:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.339 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:51.597 00:37:51.597 real 0m11.987s 00:37:51.597 user 0m16.565s 00:37:51.597 sys 0m6.035s 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:51.597 ************************************ 00:37:51.597 END TEST nvmf_host_management 00:37:51.597 ************************************ 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:51.597 
16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:51.597 ************************************ 00:37:51.597 START TEST nvmf_lvol 00:37:51.597 ************************************ 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:51.597 * Looking for test storage... 00:37:51.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:51.597 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.598 16:51:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.598 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:51.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.857 --rc genhtml_branch_coverage=1 00:37:51.857 --rc 
genhtml_function_coverage=1 00:37:51.857 --rc genhtml_legend=1 00:37:51.857 --rc geninfo_all_blocks=1 00:37:51.857 --rc geninfo_unexecuted_blocks=1 00:37:51.857 00:37:51.857 ' 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:51.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.857 --rc genhtml_branch_coverage=1 00:37:51.857 --rc genhtml_function_coverage=1 00:37:51.857 --rc genhtml_legend=1 00:37:51.857 --rc geninfo_all_blocks=1 00:37:51.857 --rc geninfo_unexecuted_blocks=1 00:37:51.857 00:37:51.857 ' 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:51.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.857 --rc genhtml_branch_coverage=1 00:37:51.857 --rc genhtml_function_coverage=1 00:37:51.857 --rc genhtml_legend=1 00:37:51.857 --rc geninfo_all_blocks=1 00:37:51.857 --rc geninfo_unexecuted_blocks=1 00:37:51.857 00:37:51.857 ' 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:51.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.857 --rc genhtml_branch_coverage=1 00:37:51.857 --rc genhtml_function_coverage=1 00:37:51.857 --rc genhtml_legend=1 00:37:51.857 --rc geninfo_all_blocks=1 00:37:51.857 --rc geninfo_unexecuted_blocks=1 00:37:51.857 00:37:51.857 ' 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:51.857 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.858 16:51:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.858 16:51:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.858 16:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:57.244 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:57.244 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:57.244 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:57.244 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:57.244 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:57.244 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:57.244 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:57.245 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:57.245 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:57.245 16:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:57.245 Found net devices under 0000:af:00.0: cvl_0_0 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:57.245 Found net devices under 0000:af:00.1: cvl_0_1 00:37:57.245 16:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:57.245 16:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:57.245 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:57.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:57.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:37:57.504 00:37:57.504 --- 10.0.0.2 ping statistics --- 00:37:57.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.504 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:57.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:57.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:37:57.504 00:37:57.504 --- 10.0.0.1 ping statistics --- 00:37:57.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.504 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:57.504 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:57.763 
16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1237393 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1237393 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1237393 ']' 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:57.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:57.763 [2024-12-14 16:51:27.645364] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:57.763 [2024-12-14 16:51:27.646274] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:57.763 [2024-12-14 16:51:27.646305] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:57.763 [2024-12-14 16:51:27.724805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:57.763 [2024-12-14 16:51:27.747331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:57.763 [2024-12-14 16:51:27.747367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:57.763 [2024-12-14 16:51:27.747373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:57.763 [2024-12-14 16:51:27.747379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:57.763 [2024-12-14 16:51:27.747384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:57.763 [2024-12-14 16:51:27.748614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.763 [2024-12-14 16:51:27.748725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.763 [2024-12-14 16:51:27.748727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:57.763 [2024-12-14 16:51:27.810929] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:57.763 [2024-12-14 16:51:27.811767] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:57.763 [2024-12-14 16:51:27.811894] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:57.763 [2024-12-14 16:51:27.812080] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:57.763 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:58.021 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.021 16:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:58.021 [2024-12-14 16:51:28.041382] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.021 16:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:58.279 16:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:58.279 16:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:58.538 16:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:58.538 16:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:58.797 16:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:59.056 16:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c0857f33-6daa-4200-aaaa-9e55c6747377 00:37:59.056 16:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c0857f33-6daa-4200-aaaa-9e55c6747377 lvol 20 00:37:59.056 16:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8fb184bb-559b-4fce-b3f3-30fad0e1cb91 00:37:59.056 16:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:59.314 16:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8fb184bb-559b-4fce-b3f3-30fad0e1cb91 00:37:59.573 16:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:59.573 [2024-12-14 16:51:29.657261] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.831 16:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:59.831 
16:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1237706 00:37:59.831 16:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:59.831 16:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:01.204 16:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8fb184bb-559b-4fce-b3f3-30fad0e1cb91 MY_SNAPSHOT 00:38:01.204 16:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a749cc2d-18aa-4993-a0b9-6bcec7db6574 00:38:01.204 16:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8fb184bb-559b-4fce-b3f3-30fad0e1cb91 30 00:38:01.462 16:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a749cc2d-18aa-4993-a0b9-6bcec7db6574 MY_CLONE 00:38:01.720 16:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4186bdea-f9b3-49ef-81b8-ad38fc94a497 00:38:01.720 16:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4186bdea-f9b3-49ef-81b8-ad38fc94a497 00:38:02.285 16:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1237706 00:38:10.394 Initializing NVMe Controllers 00:38:10.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:10.394 
Controller IO queue size 128, less than required. 00:38:10.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:10.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:10.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:10.394 Initialization complete. Launching workers. 00:38:10.394 ======================================================== 00:38:10.394 Latency(us) 00:38:10.394 Device Information : IOPS MiB/s Average min max 00:38:10.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12537.90 48.98 10213.19 3850.42 59158.34 00:38:10.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12667.40 49.48 10105.25 3463.17 57129.04 00:38:10.394 ======================================================== 00:38:10.394 Total : 25205.29 98.46 10158.94 3463.17 59158.34 00:38:10.394 00:38:10.394 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:10.394 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8fb184bb-559b-4fce-b3f3-30fad0e1cb91 00:38:10.653 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c0857f33-6daa-4200-aaaa-9e55c6747377 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:10.912 rmmod nvme_tcp 00:38:10.912 rmmod nvme_fabrics 00:38:10.912 rmmod nvme_keyring 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1237393 ']' 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1237393 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1237393 ']' 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1237393 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1237393 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1237393' 00:38:10.912 killing process with pid 1237393 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1237393 00:38:10.912 16:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1237393 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.172 16:51:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:11.172 16:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.077 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:13.336 00:38:13.336 real 0m21.658s 00:38:13.336 user 0m55.457s 00:38:13.336 sys 0m9.510s 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:13.336 ************************************ 00:38:13.336 END TEST nvmf_lvol 00:38:13.336 ************************************ 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:13.336 ************************************ 00:38:13.336 START TEST nvmf_lvs_grow 00:38:13.336 ************************************ 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:13.336 * Looking for test storage... 
00:38:13.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:13.336 16:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:13.336 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:13.337 16:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.337 --rc genhtml_branch_coverage=1 00:38:13.337 --rc genhtml_function_coverage=1 00:38:13.337 --rc genhtml_legend=1 00:38:13.337 --rc geninfo_all_blocks=1 00:38:13.337 --rc geninfo_unexecuted_blocks=1 00:38:13.337 00:38:13.337 ' 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.337 --rc genhtml_branch_coverage=1 00:38:13.337 --rc genhtml_function_coverage=1 00:38:13.337 --rc genhtml_legend=1 00:38:13.337 --rc geninfo_all_blocks=1 00:38:13.337 --rc geninfo_unexecuted_blocks=1 00:38:13.337 00:38:13.337 ' 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.337 --rc genhtml_branch_coverage=1 00:38:13.337 --rc genhtml_function_coverage=1 00:38:13.337 --rc genhtml_legend=1 00:38:13.337 --rc geninfo_all_blocks=1 00:38:13.337 --rc geninfo_unexecuted_blocks=1 00:38:13.337 00:38:13.337 ' 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.337 --rc genhtml_branch_coverage=1 00:38:13.337 --rc genhtml_function_coverage=1 00:38:13.337 --rc genhtml_legend=1 00:38:13.337 --rc geninfo_all_blocks=1 00:38:13.337 --rc 
geninfo_unexecuted_blocks=1 00:38:13.337 00:38:13.337 ' 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:13.337 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:13.596 16:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:13.596 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.596 16:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:13.597 16:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:13.597 16:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:20.166 
16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:20.166 16:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:20.166 16:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:20.166 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:20.166 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:20.166 Found net devices under 0000:af:00.0: cvl_0_0 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.166 16:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:20.166 Found net devices under 0000:af:00.1: cvl_0_1 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:20.166 
16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:20.166 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:20.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:20.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:38:20.167 00:38:20.167 --- 10.0.0.2 ping statistics --- 00:38:20.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.167 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:20.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:20.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:38:20.167 00:38:20.167 --- 10.0.0.1 ping statistics --- 00:38:20.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.167 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:20.167 16:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1242900 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1242900 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1242900 ']' 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:20.167 [2024-12-14 16:51:49.358774] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:20.167 [2024-12-14 16:51:49.359692] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:20.167 [2024-12-14 16:51:49.359725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:20.167 [2024-12-14 16:51:49.437161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.167 [2024-12-14 16:51:49.458221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:20.167 [2024-12-14 16:51:49.458257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:20.167 [2024-12-14 16:51:49.458264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:20.167 [2024-12-14 16:51:49.458270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:20.167 [2024-12-14 16:51:49.458275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:20.167 [2024-12-14 16:51:49.458776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.167 [2024-12-14 16:51:49.520384] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:20.167 [2024-12-14 16:51:49.520614] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
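The trace above shows the harness moving one E810 port into a network namespace, addressing both ends, opening TCP port 4420, and then launching `nvmf_tgt` inside that namespace. A minimal dry-run sketch of that plumbing follows; interface names, addresses, and the namespace name are taken from the log, and the commands are echoed rather than executed since the real sequence needs root and the physical NICs.

```shell
# Dry-run sketch of the netns setup performed by nvmf/common.sh above.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and addresses come from the log;
# run() echoes each command instead of executing it.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace; serves 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace; gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic (default port 4420) arriving on the initiator side.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` checks that follow in the log (root namespace to 10.0.0.2, namespace to 10.0.0.1) verify this topology before the target is started with `ip netns exec "$NS" nvmf_tgt --interrupt-mode -m 0x1`.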
00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:20.167 [2024-12-14 16:51:49.755410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:20.167 ************************************ 00:38:20.167 START TEST lvs_grow_clean 00:38:20.167 ************************************ 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:20.167 16:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:20.167 16:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:20.167 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:20.167 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:20.167 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=aca40274-3558-463e-b291-3b71b944dacd 00:38:20.167 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:20.167 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:20.426 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:20.426 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:20.426 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aca40274-3558-463e-b291-3b71b944dacd lvol 150 00:38:20.685 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=11a891de-6fc4-4efa-a187-00f7b6cba7ca 00:38:20.685 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:20.685 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:20.944 [2024-12-14 16:51:50.787154] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:20.944 [2024-12-14 16:51:50.787278] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:20.944 true 00:38:20.944 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:20.944 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:20.944 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:20.944 16:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:21.203 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 11a891de-6fc4-4efa-a187-00f7b6cba7ca 00:38:21.462 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:21.462 [2024-12-14 16:51:51.507637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:21.462 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1243307 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1243307 /var/tmp/bdevperf.sock 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1243307 ']' 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:21.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
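The sizes reported earlier in the trace are internally consistent and easy to check: the 200M aio file at a 4096-byte block size is 51200 blocks (102400 after `truncate -s 400M` and the rescan), and with the 4 MiB cluster size passed to `bdev_lvol_create_lvstore` that is 50 and 100 clusters, of which the lvstore reports one fewer (49, then 99 after `bdev_lvol_grow_lvstore`) since, in this run, one cluster is consumed by lvstore metadata. A quick arithmetic check:

```shell
# Verify the block and cluster counts seen in the log:
#   bdev_aio rescan: old block count 51200, new block count 102400
#   total_data_clusters: 49 before grow, 99 after
mib=$((1024 * 1024))
old_blocks=$((200 * mib / 4096))        # 200M aio file / 4096 B blocks
new_blocks=$((400 * mib / 4096))        # after truncate -s 400M
cluster_sz=$((4 * mib))                 # --cluster-sz 4194304
old_clusters=$((200 * mib / cluster_sz - 1))   # minus metadata cluster
new_clusters=$((400 * mib / cluster_sz - 1))
echo "$old_blocks $new_blocks $old_clusters $new_clusters"   # -> 51200 102400 49 99
```

This matches the `(( data_clusters == 49 ))` and `(( data_clusters == 99 ))` assertions the test script makes before and after growing the lvstore.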
00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:21.721 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:21.721 [2024-12-14 16:51:51.761079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:21.721 [2024-12-14 16:51:51.761129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243307 ] 00:38:21.980 [2024-12-14 16:51:51.836811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.980 [2024-12-14 16:51:51.859179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:21.980 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:21.980 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:21.980 16:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:22.548 Nvme0n1 00:38:22.548 16:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:22.548 [ 00:38:22.548 { 00:38:22.548 "name": "Nvme0n1", 00:38:22.548 "aliases": [ 00:38:22.548 "11a891de-6fc4-4efa-a187-00f7b6cba7ca" 00:38:22.548 ], 00:38:22.548 "product_name": "NVMe disk", 00:38:22.548 
"block_size": 4096, 00:38:22.548 "num_blocks": 38912, 00:38:22.548 "uuid": "11a891de-6fc4-4efa-a187-00f7b6cba7ca", 00:38:22.548 "numa_id": 1, 00:38:22.548 "assigned_rate_limits": { 00:38:22.548 "rw_ios_per_sec": 0, 00:38:22.548 "rw_mbytes_per_sec": 0, 00:38:22.548 "r_mbytes_per_sec": 0, 00:38:22.548 "w_mbytes_per_sec": 0 00:38:22.548 }, 00:38:22.548 "claimed": false, 00:38:22.548 "zoned": false, 00:38:22.548 "supported_io_types": { 00:38:22.548 "read": true, 00:38:22.548 "write": true, 00:38:22.548 "unmap": true, 00:38:22.548 "flush": true, 00:38:22.548 "reset": true, 00:38:22.548 "nvme_admin": true, 00:38:22.548 "nvme_io": true, 00:38:22.548 "nvme_io_md": false, 00:38:22.548 "write_zeroes": true, 00:38:22.548 "zcopy": false, 00:38:22.548 "get_zone_info": false, 00:38:22.548 "zone_management": false, 00:38:22.548 "zone_append": false, 00:38:22.548 "compare": true, 00:38:22.548 "compare_and_write": true, 00:38:22.548 "abort": true, 00:38:22.548 "seek_hole": false, 00:38:22.548 "seek_data": false, 00:38:22.548 "copy": true, 00:38:22.548 "nvme_iov_md": false 00:38:22.548 }, 00:38:22.548 "memory_domains": [ 00:38:22.548 { 00:38:22.548 "dma_device_id": "system", 00:38:22.548 "dma_device_type": 1 00:38:22.548 } 00:38:22.548 ], 00:38:22.548 "driver_specific": { 00:38:22.548 "nvme": [ 00:38:22.548 { 00:38:22.548 "trid": { 00:38:22.548 "trtype": "TCP", 00:38:22.548 "adrfam": "IPv4", 00:38:22.548 "traddr": "10.0.0.2", 00:38:22.548 "trsvcid": "4420", 00:38:22.548 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:22.548 }, 00:38:22.548 "ctrlr_data": { 00:38:22.548 "cntlid": 1, 00:38:22.548 "vendor_id": "0x8086", 00:38:22.548 "model_number": "SPDK bdev Controller", 00:38:22.548 "serial_number": "SPDK0", 00:38:22.548 "firmware_revision": "25.01", 00:38:22.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:22.548 "oacs": { 00:38:22.548 "security": 0, 00:38:22.548 "format": 0, 00:38:22.548 "firmware": 0, 00:38:22.548 "ns_manage": 0 00:38:22.548 }, 00:38:22.548 "multi_ctrlr": true, 
00:38:22.548 "ana_reporting": false 00:38:22.548 }, 00:38:22.548 "vs": { 00:38:22.548 "nvme_version": "1.3" 00:38:22.548 }, 00:38:22.548 "ns_data": { 00:38:22.548 "id": 1, 00:38:22.548 "can_share": true 00:38:22.548 } 00:38:22.548 } 00:38:22.548 ], 00:38:22.548 "mp_policy": "active_passive" 00:38:22.548 } 00:38:22.548 } 00:38:22.548 ] 00:38:22.548 16:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1243393 00:38:22.548 16:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:22.548 16:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:22.548 Running I/O for 10 seconds... 00:38:23.925 Latency(us) 00:38:23.925 [2024-12-14T15:51:54.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.926 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:23.926 [2024-12-14T15:51:54.012Z] =================================================================================================================== 00:38:23.926 [2024-12-14T15:51:54.012Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:23.926 00:38:24.493 16:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aca40274-3558-463e-b291-3b71b944dacd 00:38:24.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.752 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:24.752 [2024-12-14T15:51:54.838Z] 
=================================================================================================================== 00:38:24.752 [2024-12-14T15:51:54.838Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:24.752 00:38:24.752 true 00:38:24.752 16:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:24.752 16:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:25.010 16:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:25.010 16:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:25.010 16:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1243393 00:38:25.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:25.577 Nvme0n1 : 3.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:25.577 [2024-12-14T15:51:55.663Z] =================================================================================================================== 00:38:25.577 [2024-12-14T15:51:55.663Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:25.577 00:38:26.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:26.954 Nvme0n1 : 4.00 23336.25 91.16 0.00 0.00 0.00 0.00 0.00 00:38:26.954 [2024-12-14T15:51:57.040Z] =================================================================================================================== 00:38:26.954 [2024-12-14T15:51:57.040Z] Total : 23336.25 91.16 0.00 0.00 0.00 0.00 0.00 00:38:26.954 00:38:27.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:38:27.889 Nvme0n1 : 5.00 23418.80 91.48 0.00 0.00 0.00 0.00 0.00 00:38:27.889 [2024-12-14T15:51:57.975Z] =================================================================================================================== 00:38:27.889 [2024-12-14T15:51:57.975Z] Total : 23418.80 91.48 0.00 0.00 0.00 0.00 0.00 00:38:27.889 00:38:28.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.826 Nvme0n1 : 6.00 23473.83 91.69 0.00 0.00 0.00 0.00 0.00 00:38:28.826 [2024-12-14T15:51:58.912Z] =================================================================================================================== 00:38:28.826 [2024-12-14T15:51:58.912Z] Total : 23473.83 91.69 0.00 0.00 0.00 0.00 0.00 00:38:28.826 00:38:29.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.762 Nvme0n1 : 7.00 23513.14 91.85 0.00 0.00 0.00 0.00 0.00 00:38:29.762 [2024-12-14T15:51:59.848Z] =================================================================================================================== 00:38:29.762 [2024-12-14T15:51:59.848Z] Total : 23513.14 91.85 0.00 0.00 0.00 0.00 0.00 00:38:29.762 00:38:30.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.698 Nvme0n1 : 8.00 23479.12 91.72 0.00 0.00 0.00 0.00 0.00 00:38:30.698 [2024-12-14T15:52:00.784Z] =================================================================================================================== 00:38:30.698 [2024-12-14T15:52:00.784Z] Total : 23479.12 91.72 0.00 0.00 0.00 0.00 0.00 00:38:30.698 00:38:31.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:31.632 Nvme0n1 : 9.00 23509.11 91.83 0.00 0.00 0.00 0.00 0.00 00:38:31.632 [2024-12-14T15:52:01.718Z] =================================================================================================================== 00:38:31.632 [2024-12-14T15:52:01.718Z] Total : 23509.11 91.83 0.00 0.00 0.00 0.00 0.00 00:38:31.632 
00:38:33.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.008 Nvme0n1 : 10.00 23533.10 91.93 0.00 0.00 0.00 0.00 0.00 00:38:33.008 [2024-12-14T15:52:03.094Z] =================================================================================================================== 00:38:33.008 [2024-12-14T15:52:03.094Z] Total : 23533.10 91.93 0.00 0.00 0.00 0.00 0.00 00:38:33.008 00:38:33.008 00:38:33.008 Latency(us) 00:38:33.008 [2024-12-14T15:52:03.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.008 Nvme0n1 : 10.00 23530.27 91.92 0.00 0.00 5436.18 3354.82 27587.54 00:38:33.008 [2024-12-14T15:52:03.094Z] =================================================================================================================== 00:38:33.008 [2024-12-14T15:52:03.094Z] Total : 23530.27 91.92 0.00 0.00 5436.18 3354.82 27587.54 00:38:33.008 { 00:38:33.008 "results": [ 00:38:33.008 { 00:38:33.008 "job": "Nvme0n1", 00:38:33.008 "core_mask": "0x2", 00:38:33.008 "workload": "randwrite", 00:38:33.008 "status": "finished", 00:38:33.008 "queue_depth": 128, 00:38:33.008 "io_size": 4096, 00:38:33.008 "runtime": 10.003967, 00:38:33.008 "iops": 23530.265543658832, 00:38:33.008 "mibps": 91.91509977991731, 00:38:33.008 "io_failed": 0, 00:38:33.008 "io_timeout": 0, 00:38:33.008 "avg_latency_us": 5436.182335776228, 00:38:33.008 "min_latency_us": 3354.8190476190475, 00:38:33.008 "max_latency_us": 27587.53523809524 00:38:33.008 } 00:38:33.008 ], 00:38:33.008 "core_count": 1 00:38:33.008 } 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1243307 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1243307 ']' 00:38:33.008 16:52:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1243307 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1243307 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1243307' 00:38:33.008 killing process with pid 1243307 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1243307 00:38:33.008 Received shutdown signal, test time was about 10.000000 seconds 00:38:33.008 00:38:33.008 Latency(us) 00:38:33.008 [2024-12-14T15:52:03.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.008 [2024-12-14T15:52:03.094Z] =================================================================================================================== 00:38:33.008 [2024-12-14T15:52:03.094Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1243307 00:38:33.008 16:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:33.008 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:33.266 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:33.266 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:33.525 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:33.525 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:33.525 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:33.784 [2024-12-14 16:52:03.655223] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:33.784 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:34.043 request: 00:38:34.043 { 00:38:34.043 "uuid": "aca40274-3558-463e-b291-3b71b944dacd", 00:38:34.043 "method": 
"bdev_lvol_get_lvstores", 00:38:34.043 "req_id": 1 00:38:34.043 } 00:38:34.043 Got JSON-RPC error response 00:38:34.043 response: 00:38:34.043 { 00:38:34.043 "code": -19, 00:38:34.043 "message": "No such device" 00:38:34.043 } 00:38:34.043 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:34.043 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:34.043 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:34.043 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:34.043 16:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:34.043 aio_bdev 00:38:34.043 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 11a891de-6fc4-4efa-a187-00f7b6cba7ca 00:38:34.043 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=11a891de-6fc4-4efa-a187-00f7b6cba7ca 00:38:34.043 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:34.043 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:34.043 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:34.043 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:34.043 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:34.301 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 11a891de-6fc4-4efa-a187-00f7b6cba7ca -t 2000 00:38:34.560 [ 00:38:34.560 { 00:38:34.560 "name": "11a891de-6fc4-4efa-a187-00f7b6cba7ca", 00:38:34.560 "aliases": [ 00:38:34.560 "lvs/lvol" 00:38:34.560 ], 00:38:34.560 "product_name": "Logical Volume", 00:38:34.560 "block_size": 4096, 00:38:34.560 "num_blocks": 38912, 00:38:34.560 "uuid": "11a891de-6fc4-4efa-a187-00f7b6cba7ca", 00:38:34.560 "assigned_rate_limits": { 00:38:34.560 "rw_ios_per_sec": 0, 00:38:34.560 "rw_mbytes_per_sec": 0, 00:38:34.560 "r_mbytes_per_sec": 0, 00:38:34.560 "w_mbytes_per_sec": 0 00:38:34.560 }, 00:38:34.560 "claimed": false, 00:38:34.560 "zoned": false, 00:38:34.560 "supported_io_types": { 00:38:34.560 "read": true, 00:38:34.560 "write": true, 00:38:34.560 "unmap": true, 00:38:34.560 "flush": false, 00:38:34.560 "reset": true, 00:38:34.560 "nvme_admin": false, 00:38:34.560 "nvme_io": false, 00:38:34.560 "nvme_io_md": false, 00:38:34.560 "write_zeroes": true, 00:38:34.560 "zcopy": false, 00:38:34.560 "get_zone_info": false, 00:38:34.560 "zone_management": false, 00:38:34.560 "zone_append": false, 00:38:34.560 "compare": false, 00:38:34.560 "compare_and_write": false, 00:38:34.560 "abort": false, 00:38:34.560 "seek_hole": true, 00:38:34.560 "seek_data": true, 00:38:34.560 "copy": false, 00:38:34.560 "nvme_iov_md": false 00:38:34.560 }, 00:38:34.560 "driver_specific": { 00:38:34.560 "lvol": { 00:38:34.560 "lvol_store_uuid": "aca40274-3558-463e-b291-3b71b944dacd", 00:38:34.560 "base_bdev": "aio_bdev", 00:38:34.560 
"thin_provision": false, 00:38:34.560 "num_allocated_clusters": 38, 00:38:34.560 "snapshot": false, 00:38:34.560 "clone": false, 00:38:34.560 "esnap_clone": false 00:38:34.560 } 00:38:34.560 } 00:38:34.560 } 00:38:34.560 ] 00:38:34.560 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:34.560 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:34.560 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:34.819 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:34.819 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aca40274-3558-463e-b291-3b71b944dacd 00:38:34.819 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:34.819 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:34.819 16:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 11a891de-6fc4-4efa-a187-00f7b6cba7ca 00:38:35.078 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aca40274-3558-463e-b291-3b71b944dacd 
00:38:35.336 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:35.594 00:38:35.594 real 0m15.712s 00:38:35.594 user 0m15.207s 00:38:35.594 sys 0m1.474s 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:35.594 ************************************ 00:38:35.594 END TEST lvs_grow_clean 00:38:35.594 ************************************ 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:35.594 ************************************ 00:38:35.594 START TEST lvs_grow_dirty 00:38:35.594 ************************************ 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:35.594 16:52:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:35.594 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:35.854 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:35.854 16:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:36.112 16:52:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:36.112 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:36.112 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:36.371 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:36.371 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:36.371 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb lvol 150 00:38:36.371 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=732d5424-c76d-4386-9140-906e80697f16 00:38:36.371 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:36.371 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:36.629 [2024-12-14 16:52:06.587163] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:36.629 [2024-12-14 
16:52:06.587292] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:36.629 true 00:38:36.629 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:36.629 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:36.888 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:36.888 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:37.147 16:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 732d5424-c76d-4386-9140-906e80697f16 00:38:37.147 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:37.406 [2024-12-14 16:52:07.331597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:37.406 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:37.664 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1245838 00:38:37.664 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:37.664 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:37.664 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1245838 /var/tmp/bdevperf.sock 00:38:37.665 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1245838 ']' 00:38:37.665 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:37.665 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:37.665 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:37.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:37.665 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:37.665 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:37.665 [2024-12-14 16:52:07.583672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:37.665 [2024-12-14 16:52:07.583720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245838 ] 00:38:37.665 [2024-12-14 16:52:07.659021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.665 [2024-12-14 16:52:07.681459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.923 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:37.923 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:37.923 16:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:38.181 Nvme0n1 00:38:38.181 16:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:38.440 [ 00:38:38.440 { 00:38:38.440 "name": "Nvme0n1", 00:38:38.440 "aliases": [ 00:38:38.440 "732d5424-c76d-4386-9140-906e80697f16" 00:38:38.440 ], 00:38:38.440 "product_name": "NVMe disk", 00:38:38.440 "block_size": 4096, 00:38:38.440 "num_blocks": 38912, 00:38:38.440 "uuid": "732d5424-c76d-4386-9140-906e80697f16", 00:38:38.440 "numa_id": 1, 00:38:38.440 "assigned_rate_limits": { 00:38:38.440 "rw_ios_per_sec": 0, 00:38:38.440 "rw_mbytes_per_sec": 0, 00:38:38.440 "r_mbytes_per_sec": 0, 00:38:38.440 "w_mbytes_per_sec": 0 00:38:38.440 }, 00:38:38.440 "claimed": false, 00:38:38.440 "zoned": false, 
00:38:38.440 "supported_io_types": { 00:38:38.440 "read": true, 00:38:38.440 "write": true, 00:38:38.440 "unmap": true, 00:38:38.440 "flush": true, 00:38:38.440 "reset": true, 00:38:38.440 "nvme_admin": true, 00:38:38.440 "nvme_io": true, 00:38:38.440 "nvme_io_md": false, 00:38:38.440 "write_zeroes": true, 00:38:38.440 "zcopy": false, 00:38:38.440 "get_zone_info": false, 00:38:38.440 "zone_management": false, 00:38:38.440 "zone_append": false, 00:38:38.440 "compare": true, 00:38:38.440 "compare_and_write": true, 00:38:38.440 "abort": true, 00:38:38.440 "seek_hole": false, 00:38:38.440 "seek_data": false, 00:38:38.440 "copy": true, 00:38:38.440 "nvme_iov_md": false 00:38:38.440 }, 00:38:38.440 "memory_domains": [ 00:38:38.440 { 00:38:38.440 "dma_device_id": "system", 00:38:38.440 "dma_device_type": 1 00:38:38.440 } 00:38:38.440 ], 00:38:38.440 "driver_specific": { 00:38:38.440 "nvme": [ 00:38:38.440 { 00:38:38.440 "trid": { 00:38:38.440 "trtype": "TCP", 00:38:38.440 "adrfam": "IPv4", 00:38:38.440 "traddr": "10.0.0.2", 00:38:38.440 "trsvcid": "4420", 00:38:38.440 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:38.440 }, 00:38:38.440 "ctrlr_data": { 00:38:38.440 "cntlid": 1, 00:38:38.440 "vendor_id": "0x8086", 00:38:38.440 "model_number": "SPDK bdev Controller", 00:38:38.440 "serial_number": "SPDK0", 00:38:38.440 "firmware_revision": "25.01", 00:38:38.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:38.440 "oacs": { 00:38:38.441 "security": 0, 00:38:38.441 "format": 0, 00:38:38.441 "firmware": 0, 00:38:38.441 "ns_manage": 0 00:38:38.441 }, 00:38:38.441 "multi_ctrlr": true, 00:38:38.441 "ana_reporting": false 00:38:38.441 }, 00:38:38.441 "vs": { 00:38:38.441 "nvme_version": "1.3" 00:38:38.441 }, 00:38:38.441 "ns_data": { 00:38:38.441 "id": 1, 00:38:38.441 "can_share": true 00:38:38.441 } 00:38:38.441 } 00:38:38.441 ], 00:38:38.441 "mp_policy": "active_passive" 00:38:38.441 } 00:38:38.441 } 00:38:38.441 ] 00:38:38.441 16:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1245903 00:38:38.441 16:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:38.441 16:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:38.441 Running I/O for 10 seconds... 00:38:39.377 Latency(us) 00:38:39.377 [2024-12-14T15:52:09.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:39.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:39.377 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:39.377 [2024-12-14T15:52:09.463Z] =================================================================================================================== 00:38:39.377 [2024-12-14T15:52:09.463Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:39.377 00:38:40.312 16:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:40.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:40.571 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:40.571 [2024-12-14T15:52:10.657Z] =================================================================================================================== 00:38:40.571 [2024-12-14T15:52:10.657Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:40.571 00:38:40.571 true 00:38:40.571 16:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:40.571 16:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:40.829 16:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:40.829 16:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:40.829 16:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1245903 00:38:41.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:41.395 Nvme0n1 : 3.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:41.395 [2024-12-14T15:52:11.481Z] =================================================================================================================== 00:38:41.395 [2024-12-14T15:52:11.481Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:41.395 00:38:42.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:42.771 Nvme0n1 : 4.00 23336.25 91.16 0.00 0.00 0.00 0.00 0.00 00:38:42.771 [2024-12-14T15:52:12.857Z] =================================================================================================================== 00:38:42.771 [2024-12-14T15:52:12.857Z] Total : 23336.25 91.16 0.00 0.00 0.00 0.00 0.00 00:38:42.771 00:38:43.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:43.704 Nvme0n1 : 5.00 23418.80 91.48 0.00 0.00 0.00 0.00 0.00 00:38:43.704 [2024-12-14T15:52:13.790Z] =================================================================================================================== 00:38:43.704 [2024-12-14T15:52:13.790Z] Total : 23418.80 91.48 0.00 0.00 0.00 0.00 0.00 00:38:43.704 00:38:44.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:44.639 Nvme0n1 : 6.00 23458.33 91.63 0.00 0.00 0.00 0.00 0.00 00:38:44.639 [2024-12-14T15:52:14.725Z] =================================================================================================================== 00:38:44.639 [2024-12-14T15:52:14.725Z] Total : 23458.33 91.63 0.00 0.00 0.00 0.00 0.00 00:38:44.639 00:38:45.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:45.580 Nvme0n1 : 7.00 23499.86 91.80 0.00 0.00 0.00 0.00 0.00 00:38:45.580 [2024-12-14T15:52:15.666Z] =================================================================================================================== 00:38:45.580 [2024-12-14T15:52:15.666Z] Total : 23499.86 91.80 0.00 0.00 0.00 0.00 0.00 00:38:45.580 00:38:46.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:46.518 Nvme0n1 : 8.00 23531.00 91.92 0.00 0.00 0.00 0.00 0.00 00:38:46.518 [2024-12-14T15:52:16.604Z] =================================================================================================================== 00:38:46.518 [2024-12-14T15:52:16.604Z] Total : 23531.00 91.92 0.00 0.00 0.00 0.00 0.00 00:38:46.518 00:38:47.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:47.565 Nvme0n1 : 9.00 23541.11 91.96 0.00 0.00 0.00 0.00 0.00 00:38:47.565 [2024-12-14T15:52:17.651Z] =================================================================================================================== 00:38:47.565 [2024-12-14T15:52:17.651Z] Total : 23541.11 91.96 0.00 0.00 0.00 0.00 0.00 00:38:47.565 00:38:48.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:48.501 Nvme0n1 : 10.00 23561.90 92.04 0.00 0.00 0.00 0.00 0.00 00:38:48.501 [2024-12-14T15:52:18.587Z] =================================================================================================================== 00:38:48.501 [2024-12-14T15:52:18.587Z] Total : 23561.90 92.04 0.00 0.00 0.00 0.00 0.00 00:38:48.501 00:38:48.501 
00:38:48.501 Latency(us) 00:38:48.501 [2024-12-14T15:52:18.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:48.501 Nvme0n1 : 10.00 23565.46 92.05 0.00 0.00 5428.96 3229.99 25215.76 00:38:48.501 [2024-12-14T15:52:18.587Z] =================================================================================================================== 00:38:48.501 [2024-12-14T15:52:18.587Z] Total : 23565.46 92.05 0.00 0.00 5428.96 3229.99 25215.76 00:38:48.501 { 00:38:48.501 "results": [ 00:38:48.501 { 00:38:48.501 "job": "Nvme0n1", 00:38:48.501 "core_mask": "0x2", 00:38:48.501 "workload": "randwrite", 00:38:48.501 "status": "finished", 00:38:48.501 "queue_depth": 128, 00:38:48.501 "io_size": 4096, 00:38:48.501 "runtime": 10.003923, 00:38:48.501 "iops": 23565.455271896833, 00:38:48.501 "mibps": 92.052559655847, 00:38:48.501 "io_failed": 0, 00:38:48.501 "io_timeout": 0, 00:38:48.501 "avg_latency_us": 5428.96492831803, 00:38:48.501 "min_latency_us": 3229.9885714285715, 00:38:48.501 "max_latency_us": 25215.75619047619 00:38:48.501 } 00:38:48.501 ], 00:38:48.501 "core_count": 1 00:38:48.501 } 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1245838 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1245838 ']' 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1245838 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:48.501 16:52:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1245838 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1245838' 00:38:48.501 killing process with pid 1245838 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1245838 00:38:48.501 Received shutdown signal, test time was about 10.000000 seconds 00:38:48.501 00:38:48.501 Latency(us) 00:38:48.501 [2024-12-14T15:52:18.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.501 [2024-12-14T15:52:18.587Z] =================================================================================================================== 00:38:48.501 [2024-12-14T15:52:18.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:48.501 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1245838 00:38:48.760 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:49.019 16:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:49.019 16:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:49.019 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1242900 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1242900 00:38:49.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1242900 Killed "${NVMF_APP[@]}" "$@" 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1247693 00:38:49.277 16:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1247693 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1247693 ']' 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:49.277 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:49.536 [2024-12-14 16:52:19.387935] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:49.536 [2024-12-14 16:52:19.388846] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:49.536 [2024-12-14 16:52:19.388880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:49.536 [2024-12-14 16:52:19.467380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.536 [2024-12-14 16:52:19.488249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:49.536 [2024-12-14 16:52:19.488285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:49.536 [2024-12-14 16:52:19.488292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:49.536 [2024-12-14 16:52:19.488298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:49.536 [2024-12-14 16:52:19.488303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:49.536 [2024-12-14 16:52:19.488793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.536 [2024-12-14 16:52:19.550428] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:49.536 [2024-12-14 16:52:19.550646] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:49.536 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.536 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:49.536 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:49.536 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:49.536 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:49.536 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:49.536 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:49.795 [2024-12-14 16:52:19.786120] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:49.795 [2024-12-14 16:52:19.786321] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:49.795 [2024-12-14 16:52:19.786404] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:49.795 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:49.795 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 732d5424-c76d-4386-9140-906e80697f16 00:38:49.795 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=732d5424-c76d-4386-9140-906e80697f16 00:38:49.796 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:49.796 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:49.796 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:49.796 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:49.796 16:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:50.054 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 732d5424-c76d-4386-9140-906e80697f16 -t 2000 00:38:50.313 [ 00:38:50.313 { 00:38:50.313 "name": "732d5424-c76d-4386-9140-906e80697f16", 00:38:50.313 "aliases": [ 00:38:50.313 "lvs/lvol" 00:38:50.313 ], 00:38:50.313 "product_name": "Logical Volume", 00:38:50.313 "block_size": 4096, 00:38:50.313 "num_blocks": 38912, 00:38:50.313 "uuid": "732d5424-c76d-4386-9140-906e80697f16", 00:38:50.313 "assigned_rate_limits": { 00:38:50.313 "rw_ios_per_sec": 0, 00:38:50.313 "rw_mbytes_per_sec": 0, 00:38:50.313 "r_mbytes_per_sec": 0, 00:38:50.313 "w_mbytes_per_sec": 0 00:38:50.313 }, 00:38:50.313 "claimed": false, 00:38:50.313 "zoned": false, 00:38:50.313 "supported_io_types": { 00:38:50.313 "read": true, 00:38:50.313 "write": true, 00:38:50.313 "unmap": true, 00:38:50.313 "flush": false, 00:38:50.313 "reset": true, 00:38:50.313 "nvme_admin": false, 00:38:50.313 "nvme_io": false, 00:38:50.313 "nvme_io_md": false, 00:38:50.313 "write_zeroes": true, 
00:38:50.313 "zcopy": false, 00:38:50.313 "get_zone_info": false, 00:38:50.313 "zone_management": false, 00:38:50.313 "zone_append": false, 00:38:50.313 "compare": false, 00:38:50.313 "compare_and_write": false, 00:38:50.313 "abort": false, 00:38:50.313 "seek_hole": true, 00:38:50.313 "seek_data": true, 00:38:50.313 "copy": false, 00:38:50.313 "nvme_iov_md": false 00:38:50.313 }, 00:38:50.313 "driver_specific": { 00:38:50.313 "lvol": { 00:38:50.313 "lvol_store_uuid": "21f0bb00-b78f-421f-81cc-d6c4bd42abbb", 00:38:50.313 "base_bdev": "aio_bdev", 00:38:50.313 "thin_provision": false, 00:38:50.313 "num_allocated_clusters": 38, 00:38:50.313 "snapshot": false, 00:38:50.313 "clone": false, 00:38:50.313 "esnap_clone": false 00:38:50.313 } 00:38:50.313 } 00:38:50.313 } 00:38:50.313 ] 00:38:50.313 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:50.313 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:50.313 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:50.572 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:50.572 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:50.572 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:50.572 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:50.572 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:50.831 [2024-12-14 16:52:20.761243] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:50.831 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:51.090 request: 00:38:51.090 { 00:38:51.090 "uuid": "21f0bb00-b78f-421f-81cc-d6c4bd42abbb", 00:38:51.090 "method": "bdev_lvol_get_lvstores", 00:38:51.090 "req_id": 1 00:38:51.090 } 00:38:51.090 Got JSON-RPC error response 00:38:51.090 response: 00:38:51.090 { 00:38:51.090 "code": -19, 00:38:51.090 "message": "No such device" 00:38:51.090 } 00:38:51.090 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:51.090 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:51.090 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:51.090 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:51.090 16:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:51.349 aio_bdev 00:38:51.349 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 732d5424-c76d-4386-9140-906e80697f16 00:38:51.349 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=732d5424-c76d-4386-9140-906e80697f16 00:38:51.349 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:51.349 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:51.349 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:51.349 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:51.349 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:51.349 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 732d5424-c76d-4386-9140-906e80697f16 -t 2000 00:38:51.608 [ 00:38:51.608 { 00:38:51.608 "name": "732d5424-c76d-4386-9140-906e80697f16", 00:38:51.608 "aliases": [ 00:38:51.608 "lvs/lvol" 00:38:51.608 ], 00:38:51.608 "product_name": "Logical Volume", 00:38:51.608 "block_size": 4096, 00:38:51.608 "num_blocks": 38912, 00:38:51.608 "uuid": "732d5424-c76d-4386-9140-906e80697f16", 00:38:51.608 "assigned_rate_limits": { 00:38:51.608 "rw_ios_per_sec": 0, 00:38:51.608 "rw_mbytes_per_sec": 0, 00:38:51.608 
"r_mbytes_per_sec": 0, 00:38:51.608 "w_mbytes_per_sec": 0 00:38:51.608 }, 00:38:51.608 "claimed": false, 00:38:51.608 "zoned": false, 00:38:51.608 "supported_io_types": { 00:38:51.608 "read": true, 00:38:51.608 "write": true, 00:38:51.608 "unmap": true, 00:38:51.608 "flush": false, 00:38:51.608 "reset": true, 00:38:51.608 "nvme_admin": false, 00:38:51.608 "nvme_io": false, 00:38:51.608 "nvme_io_md": false, 00:38:51.608 "write_zeroes": true, 00:38:51.608 "zcopy": false, 00:38:51.608 "get_zone_info": false, 00:38:51.608 "zone_management": false, 00:38:51.608 "zone_append": false, 00:38:51.608 "compare": false, 00:38:51.608 "compare_and_write": false, 00:38:51.608 "abort": false, 00:38:51.608 "seek_hole": true, 00:38:51.608 "seek_data": true, 00:38:51.608 "copy": false, 00:38:51.608 "nvme_iov_md": false 00:38:51.608 }, 00:38:51.608 "driver_specific": { 00:38:51.608 "lvol": { 00:38:51.608 "lvol_store_uuid": "21f0bb00-b78f-421f-81cc-d6c4bd42abbb", 00:38:51.608 "base_bdev": "aio_bdev", 00:38:51.608 "thin_provision": false, 00:38:51.608 "num_allocated_clusters": 38, 00:38:51.608 "snapshot": false, 00:38:51.608 "clone": false, 00:38:51.608 "esnap_clone": false 00:38:51.608 } 00:38:51.608 } 00:38:51.608 } 00:38:51.608 ] 00:38:51.608 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:51.608 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:51.608 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:51.867 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:51.867 16:52:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:51.867 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:52.126 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:52.126 16:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 732d5424-c76d-4386-9140-906e80697f16 00:38:52.126 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21f0bb00-b78f-421f-81cc-d6c4bd42abbb 00:38:52.385 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:52.645 00:38:52.645 real 0m16.985s 00:38:52.645 user 0m34.338s 00:38:52.645 sys 0m3.910s 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:52.645 ************************************ 00:38:52.645 END TEST lvs_grow_dirty 00:38:52.645 ************************************ 
00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:52.645 nvmf_trace.0 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:52.645 16:52:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:52.645 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:52.645 rmmod nvme_tcp 00:38:52.645 rmmod nvme_fabrics 00:38:52.645 rmmod nvme_keyring 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1247693 ']' 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1247693 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1247693 ']' 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1247693 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1247693 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:52.904 
16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1247693' 00:38:52.904 killing process with pid 1247693 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1247693 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1247693 00:38:52.904 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:52.905 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:52.905 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:52.905 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:52.905 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:52.905 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:52.905 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:53.163 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:53.163 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:53.163 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.163 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:53.163 16:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.067 
16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:55.067 00:38:55.067 real 0m41.821s 00:38:55.067 user 0m52.113s 00:38:55.067 sys 0m10.140s 00:38:55.067 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:55.067 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:55.067 ************************************ 00:38:55.067 END TEST nvmf_lvs_grow 00:38:55.067 ************************************ 00:38:55.067 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:55.067 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:55.067 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:55.067 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:55.067 ************************************ 00:38:55.067 START TEST nvmf_bdev_io_wait 00:38:55.067 ************************************ 00:38:55.067 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:55.327 * Looking for test storage... 
00:38:55.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:55.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.327 --rc genhtml_branch_coverage=1 00:38:55.327 --rc genhtml_function_coverage=1 00:38:55.327 --rc genhtml_legend=1 00:38:55.327 --rc geninfo_all_blocks=1 00:38:55.327 --rc geninfo_unexecuted_blocks=1 00:38:55.327 00:38:55.327 ' 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:55.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.327 --rc genhtml_branch_coverage=1 00:38:55.327 --rc genhtml_function_coverage=1 00:38:55.327 --rc genhtml_legend=1 00:38:55.327 --rc geninfo_all_blocks=1 00:38:55.327 --rc geninfo_unexecuted_blocks=1 00:38:55.327 00:38:55.327 ' 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:55.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.327 --rc genhtml_branch_coverage=1 00:38:55.327 --rc genhtml_function_coverage=1 00:38:55.327 --rc genhtml_legend=1 00:38:55.327 --rc geninfo_all_blocks=1 00:38:55.327 --rc geninfo_unexecuted_blocks=1 00:38:55.327 00:38:55.327 ' 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:55.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.327 --rc genhtml_branch_coverage=1 00:38:55.327 --rc genhtml_function_coverage=1 
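The `cmp_versions` machinery traced above (scripts/common.sh@333-368) decides whether the installed lcov predates version 2 by splitting both version strings on `.`, `-`, and `:` (the `IFS=.-:` reads) and comparing field by field. A hedged re-sketch of that `lt 1.15 2` comparison, with the helper name ours but the logic mirroring the trace:

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field version compare traced above: split on
# "." / "-" / ":", pad missing fields with 0, and succeed only when
# $1 is strictly less than $2.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0   # first differing field decides
    (( a > b )) && return 1
  done
  return 1                    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2: use legacy lcov options"
```

This is why a plain string compare is not enough: `1.15` sorts after `2` lexically but is numerically older, which is exactly the case the harness hits when choosing `LCOV_OPTS`.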
00:38:55.327 --rc genhtml_legend=1 00:38:55.327 --rc geninfo_all_blocks=1 00:38:55.327 --rc geninfo_unexecuted_blocks=1 00:38:55.327 00:38:55.327 ' 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:55.327 16:52:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.327 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.327 16:52:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:55.328 16:52:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:55.328 16:52:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:55.328 16:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:01.897 16:52:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:01.897 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:01.897 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:01.897 Found net devices under 0000:af:00.0: cvl_0_0 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:01.897 Found net devices under 0000:af:00.1: cvl_0_1 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:01.897 16:52:30 
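The device-discovery trace above builds bash arrays of PCI device IDs per NIC family (`e810`, `x722`, `mlx`) and matches each discovered device against them; both ports here report `0x8086 - 0x159b`, i.e. Intel E810, so `is_hw=yes`. A toy re-sketch of that ID-to-family lookup — the table lists only IDs visible in this trace, so it is illustrative rather than the harness's full list, and the real script keeps per-family indexed arrays instead of one map:

```shell
#!/usr/bin/env bash
# Illustrative vendor:device -> NIC-family map for the IDs that appear
# in the trace above (nvmf/common.sh@320-343).
declare -A nic_family=(
  [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810
  [0x8086:0x37d2]=x722                        # Intel X722
  [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx    # Mellanox ConnectX
)

classify_nic() {
  echo "${nic_family[$1]:-unknown}"
}

# Both ports found in this log are 0x8086:0x159b:
classify_nic 0x8086:0x159b    # → e810
```

Once a device is classified, the harness enumerates `/sys/bus/pci/devices/$pci/net/` to map the PCI function to its kernel netdev name, which is how `cvl_0_0` and `cvl_0_1` are found.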
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:01.897 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.898 16:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:01.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:01.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:39:01.898 00:39:01.898 --- 10.0.0.2 ping statistics --- 00:39:01.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.898 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:01.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:39:01.898 00:39:01.898 --- 10.0.0.1 ping statistics --- 00:39:01.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.898 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:01.898 16:52:31 
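Note the `ipts` wrapper in the setup above (nvmf/common.sh@287/790): every firewall rule the harness inserts carries `-m comment --comment 'SPDK_NVMF:…'`, so the earlier teardown can run `iptables-save | grep -v SPDK_NVMF | iptables-restore` and delete exactly its own rules. The tag-then-filter idea can be shown without touching a real firewall; the sample ruleset text below is ours, not from the log:

```shell
#!/usr/bin/env bash
# Demonstrates the tag-and-filter cleanup pattern from the log on a
# plain-text stand-in for `iptables-save` output (no root needed).
# Harness-added rules carry an SPDK_NVMF comment; grep -v removes only
# those, leaving pre-existing rules intact.
ruleset='-A INPUT -s 192.168.0.0/16 -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -p icmp -j ACCEPT'

# Equivalent of: iptables-save | grep -v SPDK_NVMF | iptables-restore
printf '%s\n' "$ruleset" | grep -v SPDK_NVMF
```

Only the two untagged rules survive the filter, which is why the comment encodes the original insert command: it makes the rule self-describing and safely removable even if the test aborts mid-run.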
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1251670 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1251670 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1251670 ']' 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:01.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.898 [2024-12-14 16:52:31.279579] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:01.898 [2024-12-14 16:52:31.280486] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:01.898 [2024-12-14 16:52:31.280519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.898 [2024-12-14 16:52:31.356094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:01.898 [2024-12-14 16:52:31.380075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:01.898 [2024-12-14 16:52:31.380114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:01.898 [2024-12-14 16:52:31.380121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:01.898 [2024-12-14 16:52:31.380127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:01.898 [2024-12-14 16:52:31.380132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:01.898 [2024-12-14 16:52:31.381545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.898 [2024-12-14 16:52:31.381660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:01.898 [2024-12-14 16:52:31.381693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.898 [2024-12-14 16:52:31.381694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:01.898 [2024-12-14 16:52:31.382121] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.898 16:52:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.898 [2024-12-14 16:52:31.534647] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:01.898 [2024-12-14 16:52:31.535407] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:01.898 [2024-12-14 16:52:31.535563] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:01.898 [2024-12-14 16:52:31.535708] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.898 [2024-12-14 16:52:31.546500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.898 Malloc0 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:01.898 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.899 16:52:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.899 [2024-12-14 16:52:31.618800] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1251700 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1251702 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:01.899 16:52:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.899 { 00:39:01.899 "params": { 00:39:01.899 "name": "Nvme$subsystem", 00:39:01.899 "trtype": "$TEST_TRANSPORT", 00:39:01.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.899 "adrfam": "ipv4", 00:39:01.899 "trsvcid": "$NVMF_PORT", 00:39:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.899 "hdgst": ${hdgst:-false}, 00:39:01.899 "ddgst": ${ddgst:-false} 00:39:01.899 }, 00:39:01.899 "method": "bdev_nvme_attach_controller" 00:39:01.899 } 00:39:01.899 EOF 00:39:01.899 )") 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1251704 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.899 16:52:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.899 { 00:39:01.899 "params": { 00:39:01.899 "name": "Nvme$subsystem", 00:39:01.899 "trtype": "$TEST_TRANSPORT", 00:39:01.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.899 "adrfam": "ipv4", 00:39:01.899 "trsvcid": "$NVMF_PORT", 00:39:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.899 "hdgst": ${hdgst:-false}, 00:39:01.899 "ddgst": ${ddgst:-false} 00:39:01.899 }, 00:39:01.899 "method": "bdev_nvme_attach_controller" 00:39:01.899 } 00:39:01.899 EOF 00:39:01.899 )") 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1251707 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.899 { 00:39:01.899 "params": { 00:39:01.899 "name": "Nvme$subsystem", 00:39:01.899 "trtype": "$TEST_TRANSPORT", 00:39:01.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.899 "adrfam": "ipv4", 00:39:01.899 "trsvcid": "$NVMF_PORT", 00:39:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.899 "hdgst": ${hdgst:-false}, 00:39:01.899 "ddgst": ${ddgst:-false} 00:39:01.899 }, 00:39:01.899 "method": "bdev_nvme_attach_controller" 00:39:01.899 } 00:39:01.899 EOF 00:39:01.899 )") 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.899 { 00:39:01.899 "params": { 00:39:01.899 "name": "Nvme$subsystem", 00:39:01.899 "trtype": "$TEST_TRANSPORT", 00:39:01.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.899 "adrfam": "ipv4", 00:39:01.899 "trsvcid": "$NVMF_PORT", 00:39:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.899 "hdgst": ${hdgst:-false}, 00:39:01.899 "ddgst": ${ddgst:-false} 00:39:01.899 }, 00:39:01.899 "method": 
"bdev_nvme_attach_controller" 00:39:01.899 } 00:39:01.899 EOF 00:39:01.899 )") 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1251700 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.899 "params": { 00:39:01.899 "name": "Nvme1", 00:39:01.899 "trtype": "tcp", 00:39:01.899 "traddr": "10.0.0.2", 00:39:01.899 "adrfam": "ipv4", 00:39:01.899 "trsvcid": "4420", 00:39:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.899 "hdgst": false, 00:39:01.899 "ddgst": false 00:39:01.899 }, 00:39:01.899 "method": "bdev_nvme_attach_controller" 00:39:01.899 }' 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.899 "params": { 00:39:01.899 "name": "Nvme1", 00:39:01.899 "trtype": "tcp", 00:39:01.899 "traddr": "10.0.0.2", 00:39:01.899 "adrfam": "ipv4", 00:39:01.899 "trsvcid": "4420", 00:39:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.899 "hdgst": false, 00:39:01.899 "ddgst": false 00:39:01.899 }, 00:39:01.899 "method": "bdev_nvme_attach_controller" 00:39:01.899 }' 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.899 "params": { 00:39:01.899 "name": "Nvme1", 00:39:01.899 "trtype": "tcp", 00:39:01.899 "traddr": "10.0.0.2", 00:39:01.899 "adrfam": "ipv4", 00:39:01.899 "trsvcid": "4420", 00:39:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.899 "hdgst": false, 00:39:01.899 "ddgst": false 00:39:01.899 }, 00:39:01.899 "method": "bdev_nvme_attach_controller" 00:39:01.899 }' 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:01.899 16:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.899 "params": { 00:39:01.899 "name": "Nvme1", 00:39:01.899 "trtype": "tcp", 00:39:01.899 "traddr": "10.0.0.2", 00:39:01.899 "adrfam": "ipv4", 00:39:01.899 "trsvcid": "4420", 00:39:01.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.899 "hdgst": false, 00:39:01.899 "ddgst": false 00:39:01.899 }, 00:39:01.899 "method": "bdev_nvme_attach_controller" 
00:39:01.900 }' 00:39:01.900 [2024-12-14 16:52:31.669199] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:01.900 [2024-12-14 16:52:31.669245] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:01.900 [2024-12-14 16:52:31.672512] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:01.900 [2024-12-14 16:52:31.672561] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:01.900 [2024-12-14 16:52:31.674158] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:01.900 [2024-12-14 16:52:31.674201] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:01.900 [2024-12-14 16:52:31.674606] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:01.900 [2024-12-14 16:52:31.674647] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:01.900 [2024-12-14 16:52:31.849606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.900 [2024-12-14 16:52:31.866946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:01.900 [2024-12-14 16:52:31.948114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.900 [2024-12-14 16:52:31.967287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:02.158 [2024-12-14 16:52:32.000927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.158 [2024-12-14 16:52:32.016595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:02.158 [2024-12-14 16:52:32.052291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.158 [2024-12-14 16:52:32.068210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:02.158 Running I/O for 1 seconds... 00:39:02.158 Running I/O for 1 seconds... 00:39:02.158 Running I/O for 1 seconds... 00:39:02.158 Running I/O for 1 seconds... 
00:39:03.093 14249.00 IOPS, 55.66 MiB/s 00:39:03.093 Latency(us) 00:39:03.093 [2024-12-14T15:52:33.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.093 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:03.093 Nvme1n1 : 1.01 14312.32 55.91 0.00 0.00 8917.89 1505.77 10236.10 00:39:03.093 [2024-12-14T15:52:33.179Z] =================================================================================================================== 00:39:03.093 [2024-12-14T15:52:33.179Z] Total : 14312.32 55.91 0.00 0.00 8917.89 1505.77 10236.10 00:39:03.351 6330.00 IOPS, 24.73 MiB/s 00:39:03.351 Latency(us) 00:39:03.351 [2024-12-14T15:52:33.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.351 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:03.351 Nvme1n1 : 1.01 6384.64 24.94 0.00 0.00 19895.92 4556.31 25964.74 00:39:03.351 [2024-12-14T15:52:33.437Z] =================================================================================================================== 00:39:03.351 [2024-12-14T15:52:33.437Z] Total : 6384.64 24.94 0.00 0.00 19895.92 4556.31 25964.74 00:39:03.351 243928.00 IOPS, 952.84 MiB/s 00:39:03.351 Latency(us) 00:39:03.351 [2024-12-14T15:52:33.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.351 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:03.351 Nvme1n1 : 1.00 243562.86 951.42 0.00 0.00 522.96 222.35 1490.16 00:39:03.351 [2024-12-14T15:52:33.437Z] =================================================================================================================== 00:39:03.351 [2024-12-14T15:52:33.437Z] Total : 243562.86 951.42 0.00 0.00 522.96 222.35 1490.16 00:39:03.351 6322.00 IOPS, 24.70 MiB/s 00:39:03.351 Latency(us) 00:39:03.351 [2024-12-14T15:52:33.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.351 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:39:03.351 Nvme1n1 : 1.01 6420.86 25.08 0.00 0.00 19880.54 4462.69 34702.87 00:39:03.351 [2024-12-14T15:52:33.437Z] =================================================================================================================== 00:39:03.351 [2024-12-14T15:52:33.438Z] Total : 6420.86 25.08 0.00 0.00 19880.54 4462.69 34702.87 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1251702 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1251704 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1251707 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:03.352 16:52:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:03.352 rmmod nvme_tcp 00:39:03.352 rmmod nvme_fabrics 00:39:03.352 rmmod nvme_keyring 00:39:03.352 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1251670 ']' 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1251670 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1251670 ']' 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1251670 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1251670 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1251670' 00:39:03.610 killing process with pid 1251670 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1251670 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1251670 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.610 16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.610 
16:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.144 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:06.145 00:39:06.145 real 0m10.584s 00:39:06.145 user 0m14.285s 00:39:06.145 sys 0m6.243s 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:06.145 ************************************ 00:39:06.145 END TEST nvmf_bdev_io_wait 00:39:06.145 ************************************ 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:06.145 ************************************ 00:39:06.145 START TEST nvmf_queue_depth 00:39:06.145 ************************************ 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:06.145 * Looking for test storage... 
00:39:06.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:06.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.145 --rc genhtml_branch_coverage=1 00:39:06.145 --rc genhtml_function_coverage=1 00:39:06.145 --rc genhtml_legend=1 00:39:06.145 --rc geninfo_all_blocks=1 00:39:06.145 --rc geninfo_unexecuted_blocks=1 00:39:06.145 00:39:06.145 ' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:06.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.145 --rc genhtml_branch_coverage=1 00:39:06.145 --rc genhtml_function_coverage=1 00:39:06.145 --rc genhtml_legend=1 00:39:06.145 --rc geninfo_all_blocks=1 00:39:06.145 --rc geninfo_unexecuted_blocks=1 00:39:06.145 00:39:06.145 ' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:06.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.145 --rc genhtml_branch_coverage=1 00:39:06.145 --rc genhtml_function_coverage=1 00:39:06.145 --rc genhtml_legend=1 00:39:06.145 --rc geninfo_all_blocks=1 00:39:06.145 --rc geninfo_unexecuted_blocks=1 00:39:06.145 00:39:06.145 ' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:06.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.145 --rc genhtml_branch_coverage=1 00:39:06.145 --rc genhtml_function_coverage=1 00:39:06.145 --rc genhtml_legend=1 00:39:06.145 --rc 
geninfo_all_blocks=1 00:39:06.145 --rc geninfo_unexecuted_blocks=1 00:39:06.145 00:39:06.145 ' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.145 16:52:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:06.145 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:06.146 16:52:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.146 16:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.146 16:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:06.146 16:52:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:06.146 16:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:06.146 16:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:12.716 
16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:12.716 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:12.717 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:12.717 16:52:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:12.717 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:12.717 Found net devices under 0000:af:00.0: cvl_0_0 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:12.717 Found net devices under 0000:af:00.1: cvl_0_1 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:12.717 16:52:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:12.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:12.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:39:12.717 00:39:12.717 --- 10.0.0.2 ping statistics --- 00:39:12.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.717 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:12.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:12.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:39:12.717 00:39:12.717 --- 10.0.0.1 ping statistics --- 00:39:12.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:12.717 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:12.717 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:12.718 16:52:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1255432 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1255432 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1255432 ']' 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:12.718 16:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 [2024-12-14 16:52:42.000977] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:12.718 [2024-12-14 16:52:42.001963] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:12.718 [2024-12-14 16:52:42.001999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:12.718 [2024-12-14 16:52:42.082121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.718 [2024-12-14 16:52:42.103278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:12.718 [2024-12-14 16:52:42.103315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:12.718 [2024-12-14 16:52:42.103322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:12.718 [2024-12-14 16:52:42.103327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:12.718 [2024-12-14 16:52:42.103332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:12.718 [2024-12-14 16:52:42.103811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:12.718 [2024-12-14 16:52:42.165236] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:12.718 [2024-12-14 16:52:42.165452] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 [2024-12-14 16:52:42.232464] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 Malloc0 00:39:12.718 16:52:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 [2024-12-14 16:52:42.300588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.718 
16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1255631 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1255631 /var/tmp/bdevperf.sock 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1255631 ']' 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:12.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 [2024-12-14 16:52:42.350867] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:12.718 [2024-12-14 16:52:42.350907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255631 ] 00:39:12.718 [2024-12-14 16:52:42.425058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.718 [2024-12-14 16:52:42.447132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.718 NVMe0n1 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.718 16:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:12.718 Running I/O for 10 seconds... 
00:39:14.664 12229.00 IOPS, 47.77 MiB/s [2024-12-14T15:52:46.126Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-14T15:52:47.063Z] 12333.33 IOPS, 48.18 MiB/s [2024-12-14T15:52:48.000Z] 12457.25 IOPS, 48.66 MiB/s [2024-12-14T15:52:48.936Z] 12478.40 IOPS, 48.74 MiB/s [2024-12-14T15:52:49.872Z] 12466.00 IOPS, 48.70 MiB/s [2024-12-14T15:52:50.810Z] 12505.57 IOPS, 48.85 MiB/s [2024-12-14T15:52:52.188Z] 12536.50 IOPS, 48.97 MiB/s [2024-12-14T15:52:53.125Z] 12535.56 IOPS, 48.97 MiB/s [2024-12-14T15:52:53.125Z] 12572.60 IOPS, 49.11 MiB/s 00:39:23.039 Latency(us) 00:39:23.039 [2024-12-14T15:52:53.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:23.039 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:23.039 Verification LBA range: start 0x0 length 0x4000 00:39:23.039 NVMe0n1 : 10.06 12588.56 49.17 0.00 0.00 81069.72 18724.57 51929.48 00:39:23.039 [2024-12-14T15:52:53.125Z] =================================================================================================================== 00:39:23.039 [2024-12-14T15:52:53.125Z] Total : 12588.56 49.17 0.00 0.00 81069.72 18724.57 51929.48 00:39:23.039 { 00:39:23.039 "results": [ 00:39:23.039 { 00:39:23.039 "job": "NVMe0n1", 00:39:23.039 "core_mask": "0x1", 00:39:23.039 "workload": "verify", 00:39:23.039 "status": "finished", 00:39:23.039 "verify_range": { 00:39:23.039 "start": 0, 00:39:23.039 "length": 16384 00:39:23.039 }, 00:39:23.039 "queue_depth": 1024, 00:39:23.039 "io_size": 4096, 00:39:23.039 "runtime": 10.064854, 00:39:23.039 "iops": 12588.558164877504, 00:39:23.039 "mibps": 49.17405533155275, 00:39:23.039 "io_failed": 0, 00:39:23.039 "io_timeout": 0, 00:39:23.039 "avg_latency_us": 81069.72230801784, 00:39:23.039 "min_latency_us": 18724.571428571428, 00:39:23.039 "max_latency_us": 51929.4780952381 00:39:23.039 } 00:39:23.039 ], 00:39:23.039 "core_count": 1 00:39:23.039 } 00:39:23.039 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1255631 00:39:23.039 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1255631 ']' 00:39:23.039 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1255631 00:39:23.040 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:23.040 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:23.040 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1255631 00:39:23.040 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:23.040 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:23.040 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1255631' 00:39:23.040 killing process with pid 1255631 00:39:23.040 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1255631 00:39:23.040 Received shutdown signal, test time was about 10.000000 seconds 00:39:23.040 00:39:23.040 Latency(us) 00:39:23.040 [2024-12-14T15:52:53.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:23.040 [2024-12-14T15:52:53.126Z] =================================================================================================================== 00:39:23.040 [2024-12-14T15:52:53.126Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:23.040 16:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1255631 00:39:23.040 16:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:23.040 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:23.040 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:23.040 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:23.040 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:23.040 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:23.040 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:23.040 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:23.040 rmmod nvme_tcp 00:39:23.040 rmmod nvme_fabrics 00:39:23.040 rmmod nvme_keyring 00:39:23.040 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1255432 ']' 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1255432 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1255432 ']' 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1255432 00:39:23.299 16:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1255432 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1255432' 00:39:23.299 killing process with pid 1255432 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1255432 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1255432 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:23.299 16:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:25.836 00:39:25.836 real 0m19.641s 00:39:25.836 user 0m22.603s 00:39:25.836 sys 0m6.205s 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:25.836 ************************************ 00:39:25.836 END TEST nvmf_queue_depth 00:39:25.836 ************************************ 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:25.836 ************************************ 00:39:25.836 START 
TEST nvmf_target_multipath 00:39:25.836 ************************************ 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:25.836 * Looking for test storage... 00:39:25.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:25.836 16:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:25.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.836 --rc genhtml_branch_coverage=1 00:39:25.836 --rc genhtml_function_coverage=1 00:39:25.836 --rc genhtml_legend=1 00:39:25.836 --rc geninfo_all_blocks=1 00:39:25.836 --rc geninfo_unexecuted_blocks=1 00:39:25.836 00:39:25.836 ' 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:25.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.836 --rc genhtml_branch_coverage=1 00:39:25.836 --rc genhtml_function_coverage=1 00:39:25.836 --rc genhtml_legend=1 00:39:25.836 --rc geninfo_all_blocks=1 00:39:25.836 --rc geninfo_unexecuted_blocks=1 00:39:25.836 00:39:25.836 ' 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:25.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.836 --rc genhtml_branch_coverage=1 00:39:25.836 --rc genhtml_function_coverage=1 00:39:25.836 --rc genhtml_legend=1 00:39:25.836 --rc geninfo_all_blocks=1 00:39:25.836 --rc geninfo_unexecuted_blocks=1 00:39:25.836 00:39:25.836 ' 00:39:25.836 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:25.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.836 --rc genhtml_branch_coverage=1 00:39:25.837 --rc genhtml_function_coverage=1 00:39:25.837 --rc genhtml_legend=1 00:39:25.837 --rc geninfo_all_blocks=1 00:39:25.837 --rc geninfo_unexecuted_blocks=1 00:39:25.837 00:39:25.837 ' 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:25.837 16:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.837 16:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:25.837 16:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:32.411 16:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:32.411 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:32.411 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:32.411 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:32.412 Found net devices under 0000:af:00.0: cvl_0_0 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:32.412 16:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:32.412 Found net devices under 0000:af:00.1: cvl_0_1 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:32.412 16:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:32.412 16:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:32.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:32.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:39:32.412 00:39:32.412 --- 10.0.0.2 ping statistics --- 00:39:32.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.412 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:32.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:32.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:39:32.412 00:39:32.412 --- 10.0.0.1 ping statistics --- 00:39:32.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.412 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:32.412 only one NIC for nvmf test 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:32.412 16:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:32.412 rmmod nvme_tcp 00:39:32.412 rmmod nvme_fabrics 00:39:32.412 rmmod nvme_keyring 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:32.412 16:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:32.412 16:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.791 
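The `iptr` cleanup traced above (`nvmf/common.sh@791`) relies on every test rule having been installed with an `SPDK_NVMF` comment (`ipts`), so teardown can pipe `iptables-save | grep -v SPDK_NVMF | iptables-restore` and drop exactly the test rules. A sketch of that filtering step on a canned ruleset (the live pipeline needs root, so the example substitutes a here-string for `iptables-save`):

```shell
# Canned stand-in for `iptables-save` output: one SPDK-tagged test rule
# among ordinary rules.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -p icmp -j ACCEPT'

# The real sequence is: iptables-save | grep -v SPDK_NVMF | iptables-restore
# Here we only show the filter stage: the tagged rule disappears, the rest
# of the ruleset passes through untouched.
printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF
```

Tagging rules with a comment and filtering the saved ruleset makes cleanup idempotent: it removes all test rules in one pass without tracking individual rule handles.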
16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:33.791 00:39:33.791 real 0m8.292s 00:39:33.791 user 0m1.781s 00:39:33.791 sys 0m4.495s 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:33.791 ************************************ 00:39:33.791 END TEST nvmf_target_multipath 00:39:33.791 ************************************ 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:33.791 ************************************ 00:39:33.791 START TEST nvmf_zcopy 00:39:33.791 ************************************ 00:39:33.791 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:34.051 * Looking for test storage... 
00:39:34.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:34.051 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:34.051 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:34.051 16:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:34.051 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:34.051 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:34.052 16:53:04 
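The `cmp_versions` trace above (`lt 1.15 2` from `scripts/common.sh`) splits each version on `.-:` and compares component by component, treating missing components as 0. A standalone reimplementation of that idea (an illustrative sketch, not the SPDK helper itself):

```shell
# version_lt A B: succeed (return 0) iff version A is strictly less than B.
# Versions are split on '.', '-' or ':' and compared numerically, left to
# right; shorter versions are padded with zeros (so 2 == 2.0).
version_lt() {
    local IFS=.-:
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal components all the way down: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace shows `ver1[v]=1` against `ver2[v]=2` deciding the comparison at the first component, without ever reaching the `15`.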
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:34.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.052 --rc genhtml_branch_coverage=1 00:39:34.052 --rc genhtml_function_coverage=1 00:39:34.052 --rc genhtml_legend=1 00:39:34.052 --rc geninfo_all_blocks=1 00:39:34.052 --rc geninfo_unexecuted_blocks=1 00:39:34.052 00:39:34.052 ' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:34.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.052 --rc genhtml_branch_coverage=1 00:39:34.052 --rc genhtml_function_coverage=1 00:39:34.052 --rc genhtml_legend=1 00:39:34.052 --rc geninfo_all_blocks=1 00:39:34.052 --rc geninfo_unexecuted_blocks=1 00:39:34.052 00:39:34.052 ' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:34.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.052 --rc genhtml_branch_coverage=1 00:39:34.052 --rc genhtml_function_coverage=1 00:39:34.052 --rc genhtml_legend=1 00:39:34.052 --rc geninfo_all_blocks=1 00:39:34.052 --rc geninfo_unexecuted_blocks=1 00:39:34.052 00:39:34.052 ' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:34.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.052 --rc genhtml_branch_coverage=1 00:39:34.052 --rc genhtml_function_coverage=1 00:39:34.052 --rc genhtml_legend=1 00:39:34.052 --rc geninfo_all_blocks=1 00:39:34.052 --rc geninfo_unexecuted_blocks=1 00:39:34.052 00:39:34.052 ' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:34.052 16:53:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:34.052 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:34.053 16:53:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:34.053 16:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:40.622 
16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:40.622 16:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:40.622 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:40.622 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:40.622 Found net devices under 0000:af:00.0: cvl_0_0 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:40.622 Found net devices under 0000:af:00.1: cvl_0_1 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.622 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:40.623 16:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:40.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:39:40.623 00:39:40.623 --- 10.0.0.2 ping statistics --- 00:39:40.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.623 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:40.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:40.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:39:40.623 00:39:40.623 --- 10.0.0.1 ping statistics --- 00:39:40.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.623 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1264124 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1264124 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1264124 ']' 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.623 16:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.623 [2024-12-14 16:53:10.026429] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:40.623 [2024-12-14 16:53:10.027397] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:40.623 [2024-12-14 16:53:10.027433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:40.623 [2024-12-14 16:53:10.105552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.623 [2024-12-14 16:53:10.127025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.623 [2024-12-14 16:53:10.127060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.623 [2024-12-14 16:53:10.127067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.623 [2024-12-14 16:53:10.127073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.623 [2024-12-14 16:53:10.127078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:40.623 [2024-12-14 16:53:10.127537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.623 [2024-12-14 16:53:10.190985] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:40.623 [2024-12-14 16:53:10.191184] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.623 [2024-12-14 16:53:10.260218] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.623 
16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.623 [2024-12-14 16:53:10.288433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.623 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.624 malloc0 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:40.624 { 00:39:40.624 "params": { 00:39:40.624 "name": "Nvme$subsystem", 00:39:40.624 "trtype": "$TEST_TRANSPORT", 00:39:40.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.624 "adrfam": "ipv4", 00:39:40.624 "trsvcid": "$NVMF_PORT", 00:39:40.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.624 "hdgst": ${hdgst:-false}, 00:39:40.624 "ddgst": ${ddgst:-false} 00:39:40.624 }, 00:39:40.624 "method": "bdev_nvme_attach_controller" 00:39:40.624 } 00:39:40.624 EOF 00:39:40.624 )") 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:40.624 16:53:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:40.624 16:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:40.624 "params": { 00:39:40.624 "name": "Nvme1", 00:39:40.624 "trtype": "tcp", 00:39:40.624 "traddr": "10.0.0.2", 00:39:40.624 "adrfam": "ipv4", 00:39:40.624 "trsvcid": "4420", 00:39:40.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:40.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:40.624 "hdgst": false, 00:39:40.624 "ddgst": false 00:39:40.624 }, 00:39:40.624 "method": "bdev_nvme_attach_controller" 00:39:40.624 }' 00:39:40.624 [2024-12-14 16:53:10.383457] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:40.624 [2024-12-14 16:53:10.383513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264154 ] 00:39:40.624 [2024-12-14 16:53:10.459559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.624 [2024-12-14 16:53:10.482182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.624 Running I/O for 10 seconds... 
00:39:42.937 8542.00 IOPS, 66.73 MiB/s [2024-12-14T15:53:13.959Z] 8585.50 IOPS, 67.07 MiB/s [2024-12-14T15:53:14.895Z] 8614.33 IOPS, 67.30 MiB/s [2024-12-14T15:53:15.871Z] 8627.75 IOPS, 67.40 MiB/s [2024-12-14T15:53:16.914Z] 8636.80 IOPS, 67.47 MiB/s [2024-12-14T15:53:17.850Z] 8632.67 IOPS, 67.44 MiB/s [2024-12-14T15:53:18.786Z] 8638.29 IOPS, 67.49 MiB/s [2024-12-14T15:53:19.722Z] 8642.38 IOPS, 67.52 MiB/s [2024-12-14T15:53:21.099Z] 8645.22 IOPS, 67.54 MiB/s [2024-12-14T15:53:21.099Z] 8629.60 IOPS, 67.42 MiB/s 00:39:51.013 Latency(us) 00:39:51.013 [2024-12-14T15:53:21.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.013 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:51.013 Verification LBA range: start 0x0 length 0x1000 00:39:51.013 Nvme1n1 : 10.05 8596.08 67.16 0.00 0.00 14792.69 2793.08 44938.97 00:39:51.013 [2024-12-14T15:53:21.099Z] =================================================================================================================== 00:39:51.013 [2024-12-14T15:53:21.099Z] Total : 8596.08 67.16 0.00 0.00 14792.69 2793.08 44938.97 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1265718 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:51.013 16:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:51.013 { 00:39:51.013 "params": { 00:39:51.013 "name": "Nvme$subsystem", 00:39:51.013 "trtype": "$TEST_TRANSPORT", 00:39:51.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.013 "adrfam": "ipv4", 00:39:51.013 "trsvcid": "$NVMF_PORT", 00:39:51.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.013 "hdgst": ${hdgst:-false}, 00:39:51.013 "ddgst": ${ddgst:-false} 00:39:51.013 }, 00:39:51.013 "method": "bdev_nvme_attach_controller" 00:39:51.013 } 00:39:51.013 EOF 00:39:51.013 )") 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:51.013 [2024-12-14 16:53:20.911891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.013 [2024-12-14 16:53:20.911923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:51.013 16:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:51.013 "params": { 00:39:51.013 "name": "Nvme1", 00:39:51.013 "trtype": "tcp", 00:39:51.013 "traddr": "10.0.0.2", 00:39:51.013 "adrfam": "ipv4", 00:39:51.013 "trsvcid": "4420", 00:39:51.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:51.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:51.013 "hdgst": false, 00:39:51.013 "ddgst": false 00:39:51.013 }, 00:39:51.013 "method": "bdev_nvme_attach_controller" 00:39:51.013 }' 00:39:51.013 [2024-12-14 16:53:20.923853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.013 [2024-12-14 16:53:20.923865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.013 [2024-12-14 16:53:20.935847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.013 [2024-12-14 16:53:20.935857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.013 [2024-12-14 16:53:20.947847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.013 [2024-12-14 16:53:20.947857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.013 [2024-12-14 16:53:20.950118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:51.013 [2024-12-14 16:53:20.950159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265718 ] 00:39:51.013 [2024-12-14 16:53:20.959848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:20.959858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:20.971849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:20.971859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:20.983847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:20.983857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:20.995847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:20.995857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:21.007845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:21.007855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:21.019847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:21.019856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:21.024725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.014 [2024-12-14 16:53:21.031854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:51.014 [2024-12-14 16:53:21.031867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:21.043849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:21.043863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:21.047189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.014 [2024-12-14 16:53:21.055847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:21.055858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:21.067861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:21.067884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:21.079852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:21.079867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.014 [2024-12-14 16:53:21.091859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.014 [2024-12-14 16:53:21.091872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.103852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.103865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.115853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.115865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.127862] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.127879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.139854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.139869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.151853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.151868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.163849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.163860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.175854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.175866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.187848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.187857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.199851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.199865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.211849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.211862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.223848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.223859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.235847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.235856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.247849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.247862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.259857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.259867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.271849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.271863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.283848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.283857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.295851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.295864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.307848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.307857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.319847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 
[2024-12-14 16:53:21.319856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.331857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.331866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.343852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.343870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 [2024-12-14 16:53:21.355851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.273 [2024-12-14 16:53:21.355866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.273 Running I/O for 5 seconds... 00:39:51.533 [2024-12-14 16:53:21.371228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.371248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.385680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.385699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.399950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.399974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.411251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.411270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.425464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 
16:53:21.425482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.440157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.440175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.455253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.455270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.467510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.467528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.481440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.481457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.496150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.496167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.511300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.511318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.524418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.524440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.539028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.539047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.552531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.552551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.567322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.567340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.580012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.580030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.592951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.592969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.533 [2024-12-14 16:53:21.607325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.533 [2024-12-14 16:53:21.607343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.621174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.621193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.635668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.635685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.648349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.648366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 
[2024-12-14 16:53:21.660971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.660989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.675608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.675626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.688204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.688221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.703187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.703205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.717263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.717281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.731842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.731860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.742398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.742416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.757345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.757363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.771971] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.771989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.782774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.782796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.797598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.797617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.812364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.812381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.827491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.827509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.841302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.841319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.855889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.855907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.792 [2024-12-14 16:53:21.866463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.792 [2024-12-14 16:53:21.866482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.051 [2024-12-14 16:53:21.880929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:52.051 [2024-12-14 16:53:21.880946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.051 [2024-12-14 16:53:21.895498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.051 [2024-12-14 16:53:21.895516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:21.906423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:21.906440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:21.920917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:21.920935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:21.936010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:21.936029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:21.950128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:21.950145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:21.964531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:21.964549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:21.979393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:21.979411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:21.993519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 
[2024-12-14 16:53:21.993537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.008425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.008443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.023166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.023184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.037429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.037447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.051682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.051706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.065376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.065394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.080024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.080042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.093615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.093633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.107945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.107964] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.120359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.120376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.052 [2024-12-14 16:53:22.133932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.052 [2024-12-14 16:53:22.133950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.311 [2024-12-14 16:53:22.148485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.311 [2024-12-14 16:53:22.148502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.311 [2024-12-14 16:53:22.164206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.311 [2024-12-14 16:53:22.164224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.311 [2024-12-14 16:53:22.177533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.311 [2024-12-14 16:53:22.177551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.311 [2024-12-14 16:53:22.192297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.311 [2024-12-14 16:53:22.192315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.311 [2024-12-14 16:53:22.207542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.311 [2024-12-14 16:53:22.207566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.311 [2024-12-14 16:53:22.219494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.311 [2024-12-14 16:53:22.219512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:52.311 [2024-12-14 16:53:22.233433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.311 [2024-12-14 16:53:22.233451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same *ERROR* pair from subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1520:nvmf_rpc_ns_paused repeats continuously, roughly every 11-16 ms, from 16:53:22.233 through 16:53:24.423 (Jenkins timestamps 00:39:52.311 - 00:39:54.385); the only distinct output in this span is the periodic I/O throughput samples below ...]
16932.00 IOPS, 132.28 MiB/s [2024-12-14T15:53:22.397Z]
16890.50 IOPS, 131.96 MiB/s [2024-12-14T15:53:23.434Z]
16885.33 IOPS, 131.92 MiB/s [2024-12-14T15:53:24.471Z]
00:39:54.385 [2024-12-14 16:53:24.423632] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.385 [2024-12-14 16:53:24.423651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.385 [2024-12-14 16:53:24.436482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.385 [2024-12-14 16:53:24.436505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.385 [2024-12-14 16:53:24.449616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.385 [2024-12-14 16:53:24.449635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.385 [2024-12-14 16:53:24.464063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.385 [2024-12-14 16:53:24.464081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.474648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.474666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.489692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.489711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.504419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.504437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.519606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.519624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.531731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.531748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.545681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.545699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.560217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.560236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.575278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.575296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.588614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.588632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.603453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.603472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.616858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.616876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.631517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.631537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.645802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 
[2024-12-14 16:53:24.645821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.660543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.660574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.675561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.675579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.689445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.689463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.704443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.704466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.645 [2024-12-14 16:53:24.719823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.645 [2024-12-14 16:53:24.719842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.731520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.731539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.745475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.745493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.760208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.760225] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.775925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.775942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.789459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.789477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.803910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.803928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.814614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.814632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.829228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.829246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.843805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.843823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.857846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.857864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.872532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.872550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:54.904 [2024-12-14 16:53:24.887772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.887792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.900869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.900887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.915991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.916009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.927007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.927025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.941797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.941826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.956140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.956158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.967392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.967411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.904 [2024-12-14 16:53:24.981458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.904 [2024-12-14 16:53:24.981477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:24.996495] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:24.996514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.011899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.011917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.024416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.024434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.037644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.037662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.052472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.052490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.067980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.067998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.078188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.078206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.093098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.093116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.107710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.107729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.120513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.120532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.163 [2024-12-14 16:53:25.133389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.163 [2024-12-14 16:53:25.133407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.164 [2024-12-14 16:53:25.143499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.164 [2024-12-14 16:53:25.143518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.164 [2024-12-14 16:53:25.157953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.164 [2024-12-14 16:53:25.157972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.164 [2024-12-14 16:53:25.172032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.164 [2024-12-14 16:53:25.172051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.164 [2024-12-14 16:53:25.184607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.164 [2024-12-14 16:53:25.184627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.164 [2024-12-14 16:53:25.197735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.164 [2024-12-14 16:53:25.197753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.164 [2024-12-14 16:53:25.212067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.164 
[2024-12-14 16:53:25.212085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.164 [2024-12-14 16:53:25.224671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.164 [2024-12-14 16:53:25.224689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.164 [2024-12-14 16:53:25.240026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.164 [2024-12-14 16:53:25.240045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.252663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.252681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.265061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.265081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.277250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.277269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.292052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.292070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.304899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.304919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.319771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.319791] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.331060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.331079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.345852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.345871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.360282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.360300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 16889.75 IOPS, 131.95 MiB/s [2024-12-14T15:53:25.508Z] [2024-12-14 16:53:25.375560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.375580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.390289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.390310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.405262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.405280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.419980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.419999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.430799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.430817] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.445829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.445847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.460822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.460842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.475924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.475948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.489695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.489723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.422 [2024-12-14 16:53:25.504207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.422 [2024-12-14 16:53:25.504225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.519398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.519418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.533142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.533161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.543553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.543576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:55.681 [2024-12-14 16:53:25.557470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.557488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.572509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.572528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.587971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.587990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.598545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.598569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.613823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.613841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.628208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.628227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.643768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.643789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.658071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.658090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.672699] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.672718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.687258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.687277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.701947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.701966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.716397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.716415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.731788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.731806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.745829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.745852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.681 [2024-12-14 16:53:25.760328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.681 [2024-12-14 16:53:25.760346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.776317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.776334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.791689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.791708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.804336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.804354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.819625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.819644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.831967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.831985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.846126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.846143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.860932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.860951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.875680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.875698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.888616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.888634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.900673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 
[2024-12-14 16:53:25.900690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.913930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.913948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.928983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.929001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.943820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.943838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.955190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.955208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.970133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.940 [2024-12-14 16:53:25.970152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.940 [2024-12-14 16:53:25.985276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.941 [2024-12-14 16:53:25.985294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.941 [2024-12-14 16:53:25.999601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.941 [2024-12-14 16:53:25.999620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.941 [2024-12-14 16:53:26.012142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.941 [2024-12-14 16:53:26.012166] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.941 [2024-12-14 16:53:26.025439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.941 [2024-12-14 16:53:26.025458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.040287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.040305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.055536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.055555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.069179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.069198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.084050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.084069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.096357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.096375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.109385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.109404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.119762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.119780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:56.200 [2024-12-14 16:53:26.133371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.133389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.147902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.147920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.159909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.159927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.174084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.174102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.189048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.189066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.203689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.203707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.215944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.215964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.229759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.229778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.244344] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.244362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.259583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.259602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.200 [2024-12-14 16:53:26.273615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.200 [2024-12-14 16:53:26.273640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.288750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.288769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.303359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.303377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.317722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.317741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.332351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.332371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.348287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.348307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.364035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.364055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 16862.60 IOPS, 131.74 MiB/s [2024-12-14T15:53:26.545Z] [2024-12-14 16:53:26.375764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.375783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 00:39:56.459 Latency(us) 00:39:56.459 [2024-12-14T15:53:26.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.459 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:56.459 Nvme1n1 : 5.01 16862.85 131.74 0.00 0.00 7582.64 1872.46 12607.88 00:39:56.459 [2024-12-14T15:53:26.545Z] =================================================================================================================== 00:39:56.459 [2024-12-14T15:53:26.545Z] Total : 16862.85 131.74 0.00 0.00 7582.64 1872.46 12607.88 00:39:56.459 [2024-12-14 16:53:26.383853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.383870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.395854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.395868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.407863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.407880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.419856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.419872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:39:56.459 [2024-12-14 16:53:26.431862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.431876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.443852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.443865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.455861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.455874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.467852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.467866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.479851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.479863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.491846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.459 [2024-12-14 16:53:26.491856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.459 [2024-12-14 16:53:26.503851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.460 [2024-12-14 16:53:26.503862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.460 [2024-12-14 16:53:26.515858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.460 [2024-12-14 16:53:26.515868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.460 [2024-12-14 16:53:26.527849] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:56.460 [2024-12-14 16:53:26.527859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:56.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1265718) - No such process 00:39:56.460 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1265718 00:39:56.460 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:56.460 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.460 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.718 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.719 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:56.719 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.719 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.719 delay0 00:39:56.719 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.719 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:56.719 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.719 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.719 16:53:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.719 16:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:56.719 [2024-12-14 16:53:26.713702] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:04.837 Initializing NVMe Controllers 00:40:04.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:04.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:04.837 Initialization complete. Launching workers. 00:40:04.837 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 12191 00:40:04.837 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12412, failed to submit 69 00:40:04.837 success 12283, unsuccessful 129, failed 0 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:04.837 16:53:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:04.837 rmmod nvme_tcp 00:40:04.837 rmmod nvme_fabrics 00:40:04.837 rmmod nvme_keyring 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1264124 ']' 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1264124 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1264124 ']' 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1264124 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1264124 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1264124' 00:40:04.837 killing process with pid 1264124 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@973 -- # kill 1264124 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1264124 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:04.837 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:04.838 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:04.838 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:04.838 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:04.838 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:04.838 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:04.838 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.838 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.838 16:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:06.216 16:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:06.216 00:40:06.216 real 0m32.120s 00:40:06.216 user 0m41.571s 00:40:06.216 sys 0m12.811s 00:40:06.216 16:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:40:06.216 16:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:06.216 ************************************ 00:40:06.216 END TEST nvmf_zcopy 00:40:06.216 ************************************ 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:06.216 ************************************ 00:40:06.216 START TEST nvmf_nmic 00:40:06.216 ************************************ 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:06.216 * Looking for test storage... 
00:40:06.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:06.216 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:06.217 --rc genhtml_branch_coverage=1 00:40:06.217 --rc genhtml_function_coverage=1 00:40:06.217 --rc genhtml_legend=1 00:40:06.217 --rc geninfo_all_blocks=1 00:40:06.217 --rc geninfo_unexecuted_blocks=1 00:40:06.217 00:40:06.217 ' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:06.217 --rc genhtml_branch_coverage=1 00:40:06.217 --rc genhtml_function_coverage=1 00:40:06.217 --rc genhtml_legend=1 00:40:06.217 --rc geninfo_all_blocks=1 00:40:06.217 --rc geninfo_unexecuted_blocks=1 00:40:06.217 00:40:06.217 ' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:06.217 --rc genhtml_branch_coverage=1 00:40:06.217 --rc genhtml_function_coverage=1 00:40:06.217 --rc genhtml_legend=1 00:40:06.217 --rc geninfo_all_blocks=1 00:40:06.217 --rc geninfo_unexecuted_blocks=1 00:40:06.217 00:40:06.217 ' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:06.217 --rc genhtml_branch_coverage=1 00:40:06.217 --rc genhtml_function_coverage=1 00:40:06.217 --rc genhtml_legend=1 00:40:06.217 --rc geninfo_all_blocks=1 00:40:06.217 --rc geninfo_unexecuted_blocks=1 00:40:06.217 00:40:06.217 ' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:06.217 16:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:12.787 16:53:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:12.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:12.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:12.787 Found net devices under 0000:af:00.0: cvl_0_0 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:12.787 Found net devices under 0000:af:00.1: cvl_0_1 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:12.787 16:53:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:12.787 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:12.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:12.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:40:12.787 00:40:12.787 --- 10.0.0.2 ping statistics --- 00:40:12.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:12.787 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:12.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:12.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:40:12.787 00:40:12.787 --- 10.0.0.1 ping statistics --- 00:40:12.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:12.787 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1271180 
00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1271180 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1271180 ']' 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:12.787 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.787 [2024-12-14 16:53:42.161625] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:12.787 [2024-12-14 16:53:42.162527] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:12.787 [2024-12-14 16:53:42.162564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:12.788 [2024-12-14 16:53:42.242542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:12.788 [2024-12-14 16:53:42.266057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:12.788 [2024-12-14 16:53:42.266097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:12.788 [2024-12-14 16:53:42.266103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:12.788 [2024-12-14 16:53:42.266109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:12.788 [2024-12-14 16:53:42.266114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:12.788 [2024-12-14 16:53:42.267531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.788 [2024-12-14 16:53:42.267654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:12.788 [2024-12-14 16:53:42.267687] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.788 [2024-12-14 16:53:42.267688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:12.788 [2024-12-14 16:53:42.330214] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:12.788 [2024-12-14 16:53:42.331066] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:12.788 [2024-12-14 16:53:42.331277] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:12.788 [2024-12-14 16:53:42.331812] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:12.788 [2024-12-14 16:53:42.331837] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 [2024-12-14 16:53:42.396511] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 Malloc0 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 [2024-12-14 16:53:42.484573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:12.788 16:53:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:12.788 test case1: single bdev can't be used in multiple subsystems 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 [2024-12-14 16:53:42.512209] 
bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:12.788 [2024-12-14 16:53:42.512233] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:12.788 [2024-12-14 16:53:42.512241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.788 request: 00:40:12.788 { 00:40:12.788 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:12.788 "namespace": { 00:40:12.788 "bdev_name": "Malloc0", 00:40:12.788 "no_auto_visible": false, 00:40:12.788 "hide_metadata": false 00:40:12.788 }, 00:40:12.788 "method": "nvmf_subsystem_add_ns", 00:40:12.788 "req_id": 1 00:40:12.788 } 00:40:12.788 Got JSON-RPC error response 00:40:12.788 response: 00:40:12.788 { 00:40:12.788 "code": -32602, 00:40:12.788 "message": "Invalid parameters" 00:40:12.788 } 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:12.788 Adding namespace failed - expected result. 
00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:12.788 test case2: host connect to nvmf target in multiple paths 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:12.788 [2024-12-14 16:53:42.524286] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:12.788 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:13.046 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:13.046 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:13.046 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:13.046 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:13.046 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:14.945 16:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:14.945 16:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:14.945 16:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:14.945 16:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:14.945 16:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:14.945 16:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:14.945 16:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:14.945 [global] 00:40:14.945 thread=1 00:40:14.945 invalidate=1 00:40:14.945 rw=write 00:40:14.945 time_based=1 00:40:14.945 runtime=1 00:40:14.945 ioengine=libaio 00:40:14.945 direct=1 00:40:14.945 bs=4096 00:40:14.945 iodepth=1 00:40:14.945 norandommap=0 00:40:14.945 numjobs=1 00:40:14.945 00:40:14.945 verify_dump=1 00:40:14.945 verify_backlog=512 00:40:14.945 verify_state_save=0 00:40:14.945 do_verify=1 00:40:14.945 verify=crc32c-intel 00:40:14.945 [job0] 00:40:14.945 filename=/dev/nvme0n1 00:40:14.945 Could not set queue depth (nvme0n1) 00:40:15.203 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:15.203 fio-3.35 00:40:15.203 Starting 1 thread 00:40:16.578 00:40:16.578 job0: (groupid=0, jobs=1): err= 0: pid=1271781: Sat Dec 14 
16:53:46 2024 00:40:16.578 read: IOPS=23, BW=93.8KiB/s (96.0kB/s)(96.0KiB/1024msec) 00:40:16.578 slat (nsec): min=10546, max=26334, avg=22340.71, stdev=2670.07 00:40:16.578 clat (usec): min=644, max=41410, avg=39301.58, stdev=8234.61 00:40:16.578 lat (usec): min=670, max=41433, avg=39323.92, stdev=8233.76 00:40:16.578 clat percentiles (usec): 00:40:16.578 | 1.00th=[ 644], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:16.578 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:16.578 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:16.578 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:16.578 | 99.99th=[41157] 00:40:16.578 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:40:16.578 slat (nsec): min=10899, max=39892, avg=11950.51, stdev=1830.21 00:40:16.578 clat (usec): min=130, max=322, avg=141.74, stdev=11.08 00:40:16.578 lat (usec): min=141, max=362, avg=153.69, stdev=12.10 00:40:16.578 clat percentiles (usec): 00:40:16.578 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:40:16.578 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 141], 00:40:16.578 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 147], 95.00th=[ 149], 00:40:16.578 | 99.00th=[ 176], 99.50th=[ 217], 99.90th=[ 322], 99.95th=[ 322], 00:40:16.578 | 99.99th=[ 322] 00:40:16.578 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:40:16.578 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:16.578 lat (usec) : 250=95.34%, 500=0.19%, 750=0.19% 00:40:16.578 lat (msec) : 50=4.29% 00:40:16.578 cpu : usr=0.59%, sys=0.29%, ctx=536, majf=0, minf=1 00:40:16.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:16.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:16.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:16.578 issued 
rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:16.578 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:16.578 00:40:16.578 Run status group 0 (all jobs): 00:40:16.578 READ: bw=93.8KiB/s (96.0kB/s), 93.8KiB/s-93.8KiB/s (96.0kB/s-96.0kB/s), io=96.0KiB (98.3kB), run=1024-1024msec 00:40:16.578 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:40:16.578 00:40:16.578 Disk stats (read/write): 00:40:16.578 nvme0n1: ios=70/512, merge=0/0, ticks=886/78, in_queue=964, util=95.49% 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:16.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:16.578 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:16.837 16:53:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:16.837 rmmod nvme_tcp 00:40:16.837 rmmod nvme_fabrics 00:40:16.837 rmmod nvme_keyring 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1271180 ']' 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1271180 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1271180 ']' 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1271180 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1271180 
00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1271180' 00:40:16.837 killing process with pid 1271180 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1271180 00:40:16.837 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1271180 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.096 16:53:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.096 16:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:19.010 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:19.010 00:40:19.010 real 0m12.996s 00:40:19.010 user 0m24.200s 00:40:19.010 sys 0m5.912s 00:40:19.010 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:19.010 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:19.010 ************************************ 00:40:19.010 END TEST nvmf_nmic 00:40:19.010 ************************************ 00:40:19.010 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:19.010 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:19.010 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:19.010 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:19.270 ************************************ 00:40:19.270 START TEST nvmf_fio_target 00:40:19.270 ************************************ 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:19.270 * Looking for test storage... 
00:40:19.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:19.270 
16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:19.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:19.270 --rc genhtml_branch_coverage=1 00:40:19.270 --rc genhtml_function_coverage=1 00:40:19.270 --rc genhtml_legend=1 00:40:19.270 --rc geninfo_all_blocks=1 00:40:19.270 --rc geninfo_unexecuted_blocks=1 00:40:19.270 00:40:19.270 ' 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:19.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:19.270 --rc genhtml_branch_coverage=1 00:40:19.270 --rc genhtml_function_coverage=1 00:40:19.270 --rc genhtml_legend=1 00:40:19.270 --rc geninfo_all_blocks=1 00:40:19.270 --rc geninfo_unexecuted_blocks=1 00:40:19.270 00:40:19.270 ' 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:19.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:19.270 --rc genhtml_branch_coverage=1 00:40:19.270 --rc genhtml_function_coverage=1 00:40:19.270 --rc genhtml_legend=1 00:40:19.270 --rc geninfo_all_blocks=1 00:40:19.270 --rc geninfo_unexecuted_blocks=1 00:40:19.270 00:40:19.270 ' 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:19.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:19.270 --rc genhtml_branch_coverage=1 00:40:19.270 --rc genhtml_function_coverage=1 00:40:19.270 --rc genhtml_legend=1 00:40:19.270 --rc geninfo_all_blocks=1 
00:40:19.270 --rc geninfo_unexecuted_blocks=1 00:40:19.270 00:40:19.270 ' 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:19.270 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:19.271 
16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.271 16:53:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:19.271 
16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:19.271 16:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:19.271 16:53:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:25.860 16:53:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:25.860 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:25.860 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:25.860 
16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:25.860 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:25.861 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:25.861 Found net devices under 0000:af:00.1: cvl_0_1 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:25.861 16:53:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:25.861 16:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:25.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:25.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:40:25.861 00:40:25.861 --- 10.0.0.2 ping statistics --- 00:40:25.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:25.861 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:25.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:25.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:40:25.861 00:40:25.861 --- 10.0.0.1 ping statistics --- 00:40:25.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:25.861 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:25.861 16:53:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1275465 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1275465 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1275465 ']' 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:25.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:25.861 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:25.861 [2024-12-14 16:53:55.181965] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:25.861 [2024-12-14 16:53:55.182959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:25.861 [2024-12-14 16:53:55.182998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:25.861 [2024-12-14 16:53:55.263807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:25.861 [2024-12-14 16:53:55.286798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:25.861 [2024-12-14 16:53:55.286838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:25.861 [2024-12-14 16:53:55.286845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:25.861 [2024-12-14 16:53:55.286851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:25.861 [2024-12-14 16:53:55.286856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:25.861 [2024-12-14 16:53:55.288168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:25.861 [2024-12-14 16:53:55.288277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:25.861 [2024-12-14 16:53:55.288361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:25.861 [2024-12-14 16:53:55.288360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.861 [2024-12-14 16:53:55.352341] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:25.861 [2024-12-14 16:53:55.353372] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:25.862 [2024-12-14 16:53:55.353468] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:25.862 [2024-12-14 16:53:55.353809] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:25.862 [2024-12-14 16:53:55.353869] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:25.862 [2024-12-14 16:53:55.597176] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:25.862 16:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:40:26.121 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:26.121 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:26.380 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:26.380 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:26.639 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:26.639 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:26.639 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:26.898 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:26.898 16:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:27.157 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:27.157 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:27.416 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:40:27.416 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:27.416 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:27.675 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:27.675 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:27.952 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:27.952 16:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:28.210 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:28.210 [2024-12-14 16:53:58.229092] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:28.210 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:28.469 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:28.728 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:28.986 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:28.986 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:28.986 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:28.986 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:28.986 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:28.986 16:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:30.891 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:30.891 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:30.891 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:30.891 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:30.891 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:30.891 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:40:30.891 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:30.891 [global] 00:40:30.891 thread=1 00:40:30.891 invalidate=1 00:40:30.891 rw=write 00:40:30.891 time_based=1 00:40:30.891 runtime=1 00:40:30.891 ioengine=libaio 00:40:30.891 direct=1 00:40:30.891 bs=4096 00:40:30.891 iodepth=1 00:40:30.891 norandommap=0 00:40:30.891 numjobs=1 00:40:30.891 00:40:31.176 verify_dump=1 00:40:31.176 verify_backlog=512 00:40:31.176 verify_state_save=0 00:40:31.176 do_verify=1 00:40:31.176 verify=crc32c-intel 00:40:31.176 [job0] 00:40:31.176 filename=/dev/nvme0n1 00:40:31.176 [job1] 00:40:31.176 filename=/dev/nvme0n2 00:40:31.176 [job2] 00:40:31.176 filename=/dev/nvme0n3 00:40:31.176 [job3] 00:40:31.176 filename=/dev/nvme0n4 00:40:31.176 Could not set queue depth (nvme0n1) 00:40:31.176 Could not set queue depth (nvme0n2) 00:40:31.176 Could not set queue depth (nvme0n3) 00:40:31.176 Could not set queue depth (nvme0n4) 00:40:31.438 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:31.438 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:31.438 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:31.438 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:31.438 fio-3.35 00:40:31.438 Starting 4 threads 00:40:32.810 00:40:32.810 job0: (groupid=0, jobs=1): err= 0: pid=1276644: Sat Dec 14 16:54:02 2024 00:40:32.810 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:40:32.810 slat (nsec): min=10336, max=24245, avg=22429.86, stdev=2789.09 00:40:32.810 clat (usec): min=40878, max=41959, avg=41026.93, stdev=214.19 00:40:32.810 lat (usec): min=40900, 
max=41982, avg=41049.36, stdev=213.92 00:40:32.810 clat percentiles (usec): 00:40:32.810 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:32.810 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:32.810 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:32.810 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:32.810 | 99.99th=[42206] 00:40:32.810 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:40:32.810 slat (nsec): min=10087, max=37099, avg=11736.47, stdev=2439.44 00:40:32.810 clat (usec): min=135, max=316, avg=213.16, stdev=39.08 00:40:32.810 lat (usec): min=145, max=353, avg=224.90, stdev=39.46 00:40:32.810 clat percentiles (usec): 00:40:32.810 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:40:32.810 | 30.00th=[ 178], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 241], 00:40:32.810 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:40:32.810 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 318], 99.95th=[ 318], 00:40:32.810 | 99.99th=[ 318] 00:40:32.810 bw ( KiB/s): min= 4096, max= 4096, per=34.73%, avg=4096.00, stdev= 0.00, samples=1 00:40:32.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:32.810 lat (usec) : 250=92.88%, 500=3.00% 00:40:32.810 lat (msec) : 50=4.12% 00:40:32.810 cpu : usr=0.20%, sys=0.69%, ctx=537, majf=0, minf=1 00:40:32.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:32.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.810 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:32.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:32.810 job1: (groupid=0, jobs=1): err= 0: pid=1276645: Sat Dec 14 16:54:02 2024 00:40:32.810 read: IOPS=24, BW=99.8KiB/s (102kB/s)(104KiB/1042msec) 
00:40:32.810 slat (nsec): min=8733, max=24334, avg=20820.77, stdev=5100.20 00:40:32.810 clat (usec): min=274, max=41947, avg=36261.58, stdev=13252.12 00:40:32.810 lat (usec): min=298, max=41969, avg=36282.40, stdev=13251.02 00:40:32.810 clat percentiles (usec): 00:40:32.810 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[40633], 00:40:32.810 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:32.810 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:32.810 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:32.810 | 99.99th=[42206] 00:40:32.810 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:40:32.810 slat (nsec): min=9619, max=46281, avg=11026.46, stdev=2072.19 00:40:32.810 clat (usec): min=149, max=346, avg=178.55, stdev=17.36 00:40:32.810 lat (usec): min=159, max=384, avg=189.57, stdev=18.11 00:40:32.810 clat percentiles (usec): 00:40:32.810 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:40:32.810 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:40:32.810 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:40:32.810 | 99.00th=[ 235], 99.50th=[ 265], 99.90th=[ 347], 99.95th=[ 347], 00:40:32.810 | 99.99th=[ 347] 00:40:32.810 bw ( KiB/s): min= 4096, max= 4096, per=34.73%, avg=4096.00, stdev= 0.00, samples=1 00:40:32.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:32.810 lat (usec) : 250=94.42%, 500=1.30% 00:40:32.810 lat (msec) : 50=4.28% 00:40:32.810 cpu : usr=0.10%, sys=0.67%, ctx=539, majf=0, minf=1 00:40:32.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:32.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.810 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:32.810 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:40:32.810 job2: (groupid=0, jobs=1): err= 0: pid=1276652: Sat Dec 14 16:54:02 2024 00:40:32.810 read: IOPS=40, BW=164KiB/s (167kB/s)(164KiB/1003msec) 00:40:32.810 slat (nsec): min=8670, max=27310, avg=16942.22, stdev=6949.55 00:40:32.810 clat (usec): min=200, max=42379, avg=22051.12, stdev=20538.74 00:40:32.810 lat (usec): min=211, max=42387, avg=22068.07, stdev=20537.69 00:40:32.810 clat percentiles (usec): 00:40:32.810 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 227], 00:40:32.810 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[40633], 60.00th=[40633], 00:40:32.810 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:32.810 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:32.810 | 99.99th=[42206] 00:40:32.810 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:40:32.810 slat (nsec): min=6535, max=39022, avg=11323.58, stdev=2517.26 00:40:32.810 clat (usec): min=143, max=285, avg=176.97, stdev=14.34 00:40:32.810 lat (usec): min=150, max=310, avg=188.30, stdev=15.02 00:40:32.810 clat percentiles (usec): 00:40:32.810 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:40:32.810 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:40:32.810 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:40:32.810 | 99.00th=[ 223], 99.50th=[ 239], 99.90th=[ 285], 99.95th=[ 285], 00:40:32.810 | 99.99th=[ 285] 00:40:32.810 bw ( KiB/s): min= 4096, max= 4096, per=34.73%, avg=4096.00, stdev= 0.00, samples=1 00:40:32.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:32.810 lat (usec) : 250=95.66%, 500=0.36% 00:40:32.810 lat (msec) : 50=3.98% 00:40:32.810 cpu : usr=0.50%, sys=0.80%, ctx=553, majf=0, minf=1 00:40:32.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:32.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.810 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.810 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:32.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:32.810 job3: (groupid=0, jobs=1): err= 0: pid=1276653: Sat Dec 14 16:54:02 2024 00:40:32.810 read: IOPS=1039, BW=4159KiB/s (4259kB/s)(4188KiB/1007msec) 00:40:32.810 slat (nsec): min=6833, max=23014, avg=7823.19, stdev=1765.83 00:40:32.810 clat (usec): min=196, max=41743, avg=696.41, stdev=4323.64 00:40:32.810 lat (usec): min=203, max=41753, avg=704.23, stdev=4323.97 00:40:32.810 clat percentiles (usec): 00:40:32.810 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 202], 20.00th=[ 206], 00:40:32.810 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 233], 60.00th=[ 243], 00:40:32.810 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 258], 00:40:32.810 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:40:32.810 | 99.99th=[41681] 00:40:32.810 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:40:32.810 slat (nsec): min=9657, max=42563, avg=10785.50, stdev=1607.21 00:40:32.810 clat (usec): min=123, max=315, avg=160.70, stdev=31.21 00:40:32.810 lat (usec): min=134, max=325, avg=171.49, stdev=31.32 00:40:32.810 clat percentiles (usec): 00:40:32.810 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:40:32.810 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 161], 00:40:32.810 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 202], 95.00th=[ 243], 00:40:32.810 | 99.00th=[ 245], 99.50th=[ 245], 99.90th=[ 260], 99.95th=[ 314], 00:40:32.810 | 99.99th=[ 314] 00:40:32.810 bw ( KiB/s): min= 656, max=11632, per=52.10%, avg=6144.00, stdev=7761.20, samples=2 00:40:32.810 iops : min= 164, max= 2908, avg=1536.00, stdev=1940.30, samples=2 00:40:32.810 lat (usec) : 250=94.35%, 500=5.07% 00:40:32.810 lat (msec) : 2=0.12%, 50=0.46% 00:40:32.810 cpu : usr=1.09%, sys=2.58%, ctx=2583, majf=0, minf=1 00:40:32.810 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:32.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:32.810 issued rwts: total=1047,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:32.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:32.810 00:40:32.810 Run status group 0 (all jobs): 00:40:32.810 READ: bw=4361KiB/s (4466kB/s), 86.3KiB/s-4159KiB/s (88.3kB/s-4259kB/s), io=4544KiB (4653kB), run=1003-1042msec 00:40:32.810 WRITE: bw=11.5MiB/s (12.1MB/s), 1965KiB/s-6101KiB/s (2013kB/s-6248kB/s), io=12.0MiB (12.6MB), run=1003-1042msec 00:40:32.810 00:40:32.810 Disk stats (read/write): 00:40:32.810 nvme0n1: ios=41/512, merge=0/0, ticks=1559/104, in_queue=1663, util=85.67% 00:40:32.810 nvme0n2: ios=71/512, merge=0/0, ticks=802/87, in_queue=889, util=90.84% 00:40:32.810 nvme0n3: ios=88/512, merge=0/0, ticks=811/86, in_queue=897, util=94.79% 00:40:32.810 nvme0n4: ios=1097/1536, merge=0/0, ticks=637/232, in_queue=869, util=95.37% 00:40:32.810 16:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:32.810 [global] 00:40:32.810 thread=1 00:40:32.810 invalidate=1 00:40:32.810 rw=randwrite 00:40:32.810 time_based=1 00:40:32.810 runtime=1 00:40:32.810 ioengine=libaio 00:40:32.810 direct=1 00:40:32.810 bs=4096 00:40:32.810 iodepth=1 00:40:32.810 norandommap=0 00:40:32.810 numjobs=1 00:40:32.810 00:40:32.810 verify_dump=1 00:40:32.810 verify_backlog=512 00:40:32.810 verify_state_save=0 00:40:32.810 do_verify=1 00:40:32.810 verify=crc32c-intel 00:40:32.810 [job0] 00:40:32.810 filename=/dev/nvme0n1 00:40:32.811 [job1] 00:40:32.811 filename=/dev/nvme0n2 00:40:32.811 [job2] 00:40:32.811 filename=/dev/nvme0n3 00:40:32.811 [job3] 00:40:32.811 filename=/dev/nvme0n4 00:40:32.811 
Could not set queue depth (nvme0n1) 00:40:32.811 Could not set queue depth (nvme0n2) 00:40:32.811 Could not set queue depth (nvme0n3) 00:40:32.811 Could not set queue depth (nvme0n4) 00:40:33.068 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.068 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.068 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.068 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:33.068 fio-3.35 00:40:33.068 Starting 4 threads 00:40:34.441 00:40:34.441 job0: (groupid=0, jobs=1): err= 0: pid=1277053: Sat Dec 14 16:54:04 2024 00:40:34.441 read: IOPS=1049, BW=4200KiB/s (4301kB/s)(4372KiB/1041msec) 00:40:34.441 slat (nsec): min=7434, max=38628, avg=8493.44, stdev=1494.10 00:40:34.441 clat (usec): min=168, max=41109, avg=684.07, stdev=4245.72 00:40:34.441 lat (usec): min=176, max=41125, avg=692.56, stdev=4246.59 00:40:34.441 clat percentiles (usec): 00:40:34.441 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 215], 00:40:34.441 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:40:34.441 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 281], 00:40:34.441 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:34.441 | 99.99th=[41157] 00:40:34.441 write: IOPS=1475, BW=5902KiB/s (6044kB/s)(6144KiB/1041msec); 0 zone resets 00:40:34.441 slat (nsec): min=10899, max=36361, avg=12389.36, stdev=1863.39 00:40:34.441 clat (usec): min=121, max=491, avg=166.90, stdev=35.50 00:40:34.441 lat (usec): min=133, max=502, avg=179.29, stdev=35.77 00:40:34.441 clat percentiles (usec): 00:40:34.441 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:40:34.441 | 30.00th=[ 147], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 
00:40:34.441 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 237], 95.00th=[ 241], 00:40:34.441 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 465], 99.95th=[ 490], 00:40:34.441 | 99.99th=[ 490] 00:40:34.441 bw ( KiB/s): min= 1800, max=10488, per=31.23%, avg=6144.00, stdev=6143.34, samples=2 00:40:34.441 iops : min= 450, max= 2622, avg=1536.00, stdev=1535.84, samples=2 00:40:34.441 lat (usec) : 250=89.84%, 500=9.66%, 750=0.04% 00:40:34.442 lat (msec) : 50=0.46% 00:40:34.442 cpu : usr=2.21%, sys=4.04%, ctx=2631, majf=0, minf=1 00:40:34.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:34.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.442 issued rwts: total=1093,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:34.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:34.442 job1: (groupid=0, jobs=1): err= 0: pid=1277055: Sat Dec 14 16:54:04 2024 00:40:34.442 read: IOPS=2123, BW=8496KiB/s (8699kB/s)(8504KiB/1001msec) 00:40:34.442 slat (nsec): min=7153, max=45585, avg=8453.96, stdev=2277.89 00:40:34.442 clat (usec): min=166, max=307, avg=226.88, stdev=30.37 00:40:34.442 lat (usec): min=174, max=318, avg=235.34, stdev=30.77 00:40:34.442 clat percentiles (usec): 00:40:34.442 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:40:34.442 | 30.00th=[ 198], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 245], 00:40:34.442 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:40:34.442 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 297], 99.95th=[ 297], 00:40:34.442 | 99.99th=[ 310] 00:40:34.442 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:34.442 slat (nsec): min=10050, max=63010, avg=12271.66, stdev=4041.29 00:40:34.442 clat (usec): min=120, max=661, avg=177.20, stdev=46.88 00:40:34.442 lat (usec): min=131, max=675, avg=189.47, stdev=47.62 00:40:34.442 clat 
percentiles (usec): 00:40:34.442 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:40:34.442 | 30.00th=[ 139], 40.00th=[ 159], 50.00th=[ 174], 60.00th=[ 180], 00:40:34.442 | 70.00th=[ 184], 80.00th=[ 225], 90.00th=[ 247], 95.00th=[ 262], 00:40:34.442 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 619], 99.95th=[ 652], 00:40:34.442 | 99.99th=[ 660] 00:40:34.442 bw ( KiB/s): min= 9208, max= 9208, per=46.80%, avg=9208.00, stdev= 0.00, samples=1 00:40:34.442 iops : min= 2302, max= 2302, avg=2302.00, stdev= 0.00, samples=1 00:40:34.442 lat (usec) : 250=84.44%, 500=15.47%, 750=0.09% 00:40:34.442 cpu : usr=4.50%, sys=7.00%, ctx=4687, majf=0, minf=2 00:40:34.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:34.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.442 issued rwts: total=2126,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:34.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:34.442 job2: (groupid=0, jobs=1): err= 0: pid=1277057: Sat Dec 14 16:54:04 2024 00:40:34.442 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:40:34.442 slat (nsec): min=7283, max=27986, avg=23878.91, stdev=3878.41 00:40:34.442 clat (usec): min=40793, max=41091, avg=40958.26, stdev=64.49 00:40:34.442 lat (usec): min=40800, max=41113, avg=40982.14, stdev=66.50 00:40:34.442 clat percentiles (usec): 00:40:34.442 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:34.442 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:34.442 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:34.442 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:34.442 | 99.99th=[41157] 00:40:34.442 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:40:34.442 slat (nsec): min=7391, max=42918, avg=12180.52, 
stdev=3777.07 00:40:34.442 clat (usec): min=138, max=657, avg=216.36, stdev=46.12 00:40:34.442 lat (usec): min=146, max=670, avg=228.54, stdev=47.48 00:40:34.442 clat percentiles (usec): 00:40:34.442 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 186], 00:40:34.442 | 30.00th=[ 194], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 223], 00:40:34.442 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 273], 00:40:34.442 | 99.00th=[ 302], 99.50th=[ 537], 99.90th=[ 660], 99.95th=[ 660], 00:40:34.442 | 99.99th=[ 660] 00:40:34.442 bw ( KiB/s): min= 4096, max= 4096, per=20.82%, avg=4096.00, stdev= 0.00, samples=1 00:40:34.442 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:34.442 lat (usec) : 250=83.52%, 500=11.61%, 750=0.75% 00:40:34.442 lat (msec) : 50=4.12% 00:40:34.442 cpu : usr=0.29%, sys=0.98%, ctx=536, majf=0, minf=1 00:40:34.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:34.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.442 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:34.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:34.442 job3: (groupid=0, jobs=1): err= 0: pid=1277058: Sat Dec 14 16:54:04 2024 00:40:34.442 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:40:34.442 slat (nsec): min=9670, max=24835, avg=22838.70, stdev=3961.26 00:40:34.442 clat (usec): min=270, max=41050, avg=39193.16, stdev=8484.89 00:40:34.442 lat (usec): min=281, max=41074, avg=39216.00, stdev=8487.47 00:40:34.442 clat percentiles (usec): 00:40:34.442 | 1.00th=[ 273], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:34.442 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:34.442 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:34.442 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:40:34.442 | 99.99th=[41157] 00:40:34.442 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:40:34.442 slat (nsec): min=11643, max=37597, avg=13692.31, stdev=3279.03 00:40:34.442 clat (usec): min=158, max=362, avg=196.81, stdev=29.12 00:40:34.442 lat (usec): min=170, max=399, avg=210.50, stdev=30.42 00:40:34.442 clat percentiles (usec): 00:40:34.442 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:40:34.442 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:40:34.442 | 70.00th=[ 210], 80.00th=[ 235], 90.00th=[ 239], 95.00th=[ 241], 00:40:34.442 | 99.00th=[ 255], 99.50th=[ 281], 99.90th=[ 363], 99.95th=[ 363], 00:40:34.442 | 99.99th=[ 363] 00:40:34.442 bw ( KiB/s): min= 4096, max= 4096, per=20.82%, avg=4096.00, stdev= 0.00, samples=1 00:40:34.442 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:34.442 lat (usec) : 250=93.46%, 500=2.43% 00:40:34.442 lat (msec) : 50=4.11% 00:40:34.442 cpu : usr=0.30%, sys=0.69%, ctx=537, majf=0, minf=1 00:40:34.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:34.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:34.442 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:34.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:34.442 00:40:34.442 Run status group 0 (all jobs): 00:40:34.442 READ: bw=12.2MiB/s (12.8MB/s), 86.2KiB/s-8496KiB/s (88.3kB/s-8699kB/s), io=12.8MiB (13.4MB), run=1001-1041msec 00:40:34.442 WRITE: bw=19.2MiB/s (20.1MB/s), 2006KiB/s-9.99MiB/s (2054kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1041msec 00:40:34.442 00:40:34.442 Disk stats (read/write): 00:40:34.442 nvme0n1: ios=1111/1536, merge=0/0, ticks=964/235, in_queue=1199, util=99.40% 00:40:34.442 nvme0n2: ios=1783/2048, merge=0/0, ticks=400/338, in_queue=738, util=86.82% 
00:40:34.442 nvme0n3: ios=56/512, merge=0/0, ticks=838/104, in_queue=942, util=96.05% 00:40:34.442 nvme0n4: ios=47/512, merge=0/0, ticks=1124/96, in_queue=1220, util=96.75% 00:40:34.442 16:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:34.442 [global] 00:40:34.442 thread=1 00:40:34.442 invalidate=1 00:40:34.442 rw=write 00:40:34.442 time_based=1 00:40:34.442 runtime=1 00:40:34.442 ioengine=libaio 00:40:34.442 direct=1 00:40:34.442 bs=4096 00:40:34.442 iodepth=128 00:40:34.442 norandommap=0 00:40:34.442 numjobs=1 00:40:34.442 00:40:34.442 verify_dump=1 00:40:34.442 verify_backlog=512 00:40:34.442 verify_state_save=0 00:40:34.442 do_verify=1 00:40:34.442 verify=crc32c-intel 00:40:34.442 [job0] 00:40:34.442 filename=/dev/nvme0n1 00:40:34.442 [job1] 00:40:34.442 filename=/dev/nvme0n2 00:40:34.442 [job2] 00:40:34.442 filename=/dev/nvme0n3 00:40:34.442 [job3] 00:40:34.442 filename=/dev/nvme0n4 00:40:34.442 Could not set queue depth (nvme0n1) 00:40:34.442 Could not set queue depth (nvme0n2) 00:40:34.442 Could not set queue depth (nvme0n3) 00:40:34.442 Could not set queue depth (nvme0n4) 00:40:34.442 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:34.442 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:34.442 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:34.442 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:34.442 fio-3.35 00:40:34.442 Starting 4 threads 00:40:35.815 00:40:35.815 job0: (groupid=0, jobs=1): err= 0: pid=1277432: Sat Dec 14 16:54:05 2024 00:40:35.815 read: IOPS=3286, BW=12.8MiB/s (13.5MB/s)(13.0MiB/1009msec) 00:40:35.815 slat (nsec): min=1480, 
max=19579k, avg=137936.82, stdev=956205.78 00:40:35.815 clat (usec): min=4358, max=70479, avg=15385.06, stdev=12114.42 00:40:35.815 lat (usec): min=4364, max=70491, avg=15523.00, stdev=12204.82 00:40:35.815 clat percentiles (usec): 00:40:35.815 | 1.00th=[ 6325], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10159], 00:40:35.815 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:40:35.815 | 70.00th=[12125], 80.00th=[14222], 90.00th=[23462], 95.00th=[48497], 00:40:35.815 | 99.00th=[66847], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:40:35.815 | 99.99th=[70779] 00:40:35.815 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:40:35.815 slat (usec): min=2, max=26560, avg=146.22, stdev=967.30 00:40:35.815 clat (usec): min=2795, max=70436, avg=21424.38, stdev=14061.74 00:40:35.815 lat (usec): min=2806, max=70439, avg=21570.59, stdev=14146.82 00:40:35.815 clat percentiles (usec): 00:40:35.815 | 1.00th=[ 4555], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10159], 00:40:35.815 | 30.00th=[10552], 40.00th=[13698], 50.00th=[20579], 60.00th=[21890], 00:40:35.815 | 70.00th=[22152], 80.00th=[25297], 90.00th=[41681], 95.00th=[55313], 00:40:35.815 | 99.00th=[63701], 99.50th=[64226], 99.90th=[68682], 99.95th=[70779], 00:40:35.815 | 99.99th=[70779] 00:40:35.815 bw ( KiB/s): min=12288, max=16384, per=19.91%, avg=14336.00, stdev=2896.31, samples=2 00:40:35.815 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:40:35.815 lat (msec) : 4=0.26%, 10=16.49%, 20=50.07%, 50=27.23%, 100=5.94% 00:40:35.815 cpu : usr=2.78%, sys=4.66%, ctx=339, majf=0, minf=1 00:40:35.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:40:35.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.815 issued rwts: total=3316,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.815 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:40:35.815 job1: (groupid=0, jobs=1): err= 0: pid=1277447: Sat Dec 14 16:54:05 2024 00:40:35.815 read: IOPS=5687, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1004msec) 00:40:35.815 slat (nsec): min=1345, max=9323.9k, avg=86443.96, stdev=645832.50 00:40:35.815 clat (usec): min=2512, max=20305, avg=10855.79, stdev=2934.77 00:40:35.815 lat (usec): min=3951, max=20330, avg=10942.24, stdev=2979.99 00:40:35.815 clat percentiles (usec): 00:40:35.815 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 8848], 00:40:35.815 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10552], 00:40:35.815 | 70.00th=[11863], 80.00th=[13566], 90.00th=[15401], 95.00th=[16581], 00:40:35.815 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19268], 99.95th=[19792], 00:40:35.815 | 99.99th=[20317] 00:40:35.815 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:40:35.815 slat (usec): min=2, max=9820, avg=77.29, stdev=426.07 00:40:35.815 clat (usec): min=1500, max=62829, avg=10608.07, stdev=6863.02 00:40:35.815 lat (usec): min=1513, max=62842, avg=10685.36, stdev=6903.79 00:40:35.815 clat percentiles (usec): 00:40:35.815 | 1.00th=[ 4228], 5.00th=[ 5538], 10.00th=[ 6456], 20.00th=[ 7504], 00:40:35.815 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:40:35.815 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[14877], 00:40:35.815 | 99.00th=[51643], 99.50th=[58983], 99.90th=[62653], 99.95th=[62653], 00:40:35.815 | 99.99th=[62653] 00:40:35.815 bw ( KiB/s): min=23664, max=25096, per=33.86%, avg=24380.00, stdev=1012.58, samples=2 00:40:35.815 iops : min= 5916, max= 6274, avg=6095.00, stdev=253.14, samples=2 00:40:35.815 lat (msec) : 2=0.04%, 4=0.42%, 10=44.91%, 20=52.80%, 50=1.23% 00:40:35.815 lat (msec) : 100=0.59% 00:40:35.815 cpu : usr=4.59%, sys=6.18%, ctx=696, majf=0, minf=1 00:40:35.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:35.815 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.815 issued rwts: total=5710,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.815 job2: (groupid=0, jobs=1): err= 0: pid=1277469: Sat Dec 14 16:54:05 2024 00:40:35.815 read: IOPS=5545, BW=21.7MiB/s (22.7MB/s)(21.9MiB/1009msec) 00:40:35.815 slat (nsec): min=1577, max=10737k, avg=90964.74, stdev=757083.06 00:40:35.815 clat (usec): min=1137, max=21577, avg=11923.14, stdev=3034.70 00:40:35.815 lat (usec): min=6729, max=27512, avg=12014.11, stdev=3099.38 00:40:35.815 clat percentiles (usec): 00:40:35.815 | 1.00th=[ 6915], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9765], 00:40:35.815 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11338], 00:40:35.816 | 70.00th=[12125], 80.00th=[14222], 90.00th=[16909], 95.00th=[18744], 00:40:35.816 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21627], 99.95th=[21627], 00:40:35.816 | 99.99th=[21627] 00:40:35.816 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:40:35.816 slat (usec): min=2, max=18116, avg=79.95, stdev=583.79 00:40:35.816 clat (usec): min=2479, max=21395, avg=10442.15, stdev=2381.69 00:40:35.816 lat (usec): min=2490, max=21403, avg=10522.10, stdev=2405.93 00:40:35.816 clat percentiles (usec): 00:40:35.816 | 1.00th=[ 5407], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 8586], 00:40:35.816 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[10814], 60.00th=[11207], 00:40:35.816 | 70.00th=[11469], 80.00th=[11600], 90.00th=[13829], 95.00th=[14615], 00:40:35.816 | 99.00th=[16712], 99.50th=[17695], 99.90th=[20841], 99.95th=[21103], 00:40:35.816 | 99.99th=[21365] 00:40:35.816 bw ( KiB/s): min=22400, max=22656, per=31.29%, avg=22528.00, stdev=181.02, samples=2 00:40:35.816 iops : min= 5600, max= 5664, avg=5632.00, stdev=45.25, samples=2 00:40:35.816 lat (msec) : 2=0.01%, 4=0.10%, 10=28.47%, 
20=69.89%, 50=1.54% 00:40:35.816 cpu : usr=3.77%, sys=8.13%, ctx=412, majf=0, minf=1 00:40:35.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:40:35.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.816 issued rwts: total=5595,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.816 job3: (groupid=0, jobs=1): err= 0: pid=1277476: Sat Dec 14 16:54:05 2024 00:40:35.816 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:40:35.816 slat (nsec): min=1432, max=19385k, avg=139603.50, stdev=963400.71 00:40:35.816 clat (usec): min=4295, max=32385, avg=16153.57, stdev=5171.39 00:40:35.816 lat (usec): min=4305, max=32388, avg=16293.17, stdev=5232.83 00:40:35.816 clat percentiles (usec): 00:40:35.816 | 1.00th=[ 7308], 5.00th=[12125], 10.00th=[12256], 20.00th=[12518], 00:40:35.816 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[15139], 00:40:35.816 | 70.00th=[18220], 80.00th=[21103], 90.00th=[22938], 95.00th=[27132], 00:40:35.816 | 99.00th=[30802], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:40:35.816 | 99.99th=[32375] 00:40:35.816 write: IOPS=2854, BW=11.1MiB/s (11.7MB/s)(11.3MiB/1014msec); 0 zone resets 00:40:35.816 slat (usec): min=2, max=11495, avg=217.67, stdev=1006.56 00:40:35.816 clat (msec): min=3, max=108, avg=30.07, stdev=23.01 00:40:35.816 lat (msec): min=3, max=108, avg=30.29, stdev=23.14 00:40:35.816 clat percentiles (msec): 00:40:35.816 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 15], 00:40:35.816 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 23], 00:40:35.816 | 70.00th=[ 30], 80.00th=[ 41], 90.00th=[ 63], 95.00th=[ 93], 00:40:35.816 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 00:40:35.816 | 99.99th=[ 109] 00:40:35.816 bw ( KiB/s): min= 9536, max=12592, per=15.36%, 
avg=11064.00, stdev=2160.92, samples=2 00:40:35.816 iops : min= 2384, max= 3148, avg=2766.00, stdev=540.23, samples=2 00:40:35.816 lat (msec) : 4=0.22%, 10=5.19%, 20=43.89%, 50=44.28%, 100=5.02% 00:40:35.816 lat (msec) : 250=1.39% 00:40:35.816 cpu : usr=1.88%, sys=3.95%, ctx=344, majf=0, minf=1 00:40:35.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:40:35.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.816 issued rwts: total=2560,2894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.816 00:40:35.816 Run status group 0 (all jobs): 00:40:35.816 READ: bw=66.2MiB/s (69.4MB/s), 9.86MiB/s-22.2MiB/s (10.3MB/s-23.3MB/s), io=67.1MiB (70.4MB), run=1004-1014msec 00:40:35.816 WRITE: bw=70.3MiB/s (73.7MB/s), 11.1MiB/s-23.9MiB/s (11.7MB/s-25.1MB/s), io=71.3MiB (74.8MB), run=1004-1014msec 00:40:35.816 00:40:35.816 Disk stats (read/write): 00:40:35.816 nvme0n1: ios=2735/3072, merge=0/0, ticks=41480/54080, in_queue=95560, util=84.07% 00:40:35.816 nvme0n2: ios=4641/4999, merge=0/0, ticks=49221/52936, in_queue=102157, util=97.13% 00:40:35.816 nvme0n3: ios=4570/4608, merge=0/0, ticks=50225/45308, in_queue=95533, util=97.87% 00:40:35.816 nvme0n4: ios=2066/2263, merge=0/0, ticks=33586/63364, in_queue=96950, util=97.95% 00:40:35.816 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:35.816 [global] 00:40:35.816 thread=1 00:40:35.816 invalidate=1 00:40:35.816 rw=randwrite 00:40:35.816 time_based=1 00:40:35.816 runtime=1 00:40:35.816 ioengine=libaio 00:40:35.816 direct=1 00:40:35.816 bs=4096 00:40:35.816 iodepth=128 00:40:35.816 norandommap=0 00:40:35.816 numjobs=1 00:40:35.816 00:40:35.816 verify_dump=1 
00:40:35.816 verify_backlog=512 00:40:35.816 verify_state_save=0 00:40:35.816 do_verify=1 00:40:35.816 verify=crc32c-intel 00:40:35.816 [job0] 00:40:35.816 filename=/dev/nvme0n1 00:40:35.816 [job1] 00:40:35.816 filename=/dev/nvme0n2 00:40:35.816 [job2] 00:40:35.816 filename=/dev/nvme0n3 00:40:35.816 [job3] 00:40:35.816 filename=/dev/nvme0n4 00:40:35.816 Could not set queue depth (nvme0n1) 00:40:35.816 Could not set queue depth (nvme0n2) 00:40:35.816 Could not set queue depth (nvme0n3) 00:40:35.816 Could not set queue depth (nvme0n4) 00:40:36.073 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:36.074 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:36.074 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:36.074 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:36.074 fio-3.35 00:40:36.074 Starting 4 threads 00:40:37.447 00:40:37.447 job0: (groupid=0, jobs=1): err= 0: pid=1277851: Sat Dec 14 16:54:07 2024 00:40:37.447 read: IOPS=5357, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1004msec) 00:40:37.447 slat (nsec): min=1360, max=8939.4k, avg=82271.89, stdev=509918.67 00:40:37.447 clat (usec): min=1260, max=18870, avg=10269.04, stdev=1780.76 00:40:37.447 lat (usec): min=4391, max=24156, avg=10351.32, stdev=1821.02 00:40:37.447 clat percentiles (usec): 00:40:37.447 | 1.00th=[ 5538], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9241], 00:40:37.447 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:40:37.447 | 70.00th=[10683], 80.00th=[11076], 90.00th=[12125], 95.00th=[13698], 00:40:37.447 | 99.00th=[16909], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:40:37.447 | 99.99th=[18744] 00:40:37.447 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:40:37.447 slat (usec): 
min=2, max=22583, avg=93.90, stdev=692.19 00:40:37.447 clat (usec): min=2562, max=60081, avg=12416.01, stdev=8360.97 00:40:37.447 lat (usec): min=2572, max=60116, avg=12509.91, stdev=8433.24 00:40:37.447 clat percentiles (usec): 00:40:37.447 | 1.00th=[ 5669], 5.00th=[ 7177], 10.00th=[ 8455], 20.00th=[ 9503], 00:40:37.447 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:40:37.448 | 70.00th=[10552], 80.00th=[11731], 90.00th=[13304], 95.00th=[36439], 00:40:37.448 | 99.00th=[47973], 99.50th=[49546], 99.90th=[50594], 99.95th=[57410], 00:40:37.448 | 99.99th=[60031] 00:40:37.448 bw ( KiB/s): min=20472, max=24584, per=32.20%, avg=22528.00, stdev=2907.62, samples=2 00:40:37.448 iops : min= 5118, max= 6146, avg=5632.00, stdev=726.91, samples=2 00:40:37.448 lat (msec) : 2=0.01%, 4=0.12%, 10=38.30%, 20=57.76%, 50=3.76% 00:40:37.448 lat (msec) : 100=0.05% 00:40:37.448 cpu : usr=3.19%, sys=5.68%, ctx=670, majf=0, minf=1 00:40:37.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:40:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:37.448 issued rwts: total=5379,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:37.448 job1: (groupid=0, jobs=1): err= 0: pid=1277864: Sat Dec 14 16:54:07 2024 00:40:37.448 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:40:37.448 slat (nsec): min=1252, max=21736k, avg=121744.78, stdev=878506.95 00:40:37.448 clat (usec): min=3710, max=61806, avg=14568.77, stdev=8756.96 00:40:37.448 lat (usec): min=3714, max=73498, avg=14690.52, stdev=8851.73 00:40:37.448 clat percentiles (usec): 00:40:37.448 | 1.00th=[ 5211], 5.00th=[ 7308], 10.00th=[ 8291], 20.00th=[ 9634], 00:40:37.448 | 30.00th=[10552], 40.00th=[11600], 50.00th=[12780], 60.00th=[13173], 00:40:37.448 | 70.00th=[13960], 80.00th=[15139], 
90.00th=[23987], 95.00th=[33424], 00:40:37.448 | 99.00th=[53216], 99.50th=[57934], 99.90th=[59507], 99.95th=[59507], 00:40:37.448 | 99.99th=[61604] 00:40:37.448 write: IOPS=4416, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1004msec); 0 zone resets 00:40:37.448 slat (usec): min=2, max=12844, avg=107.51, stdev=657.88 00:40:37.448 clat (usec): min=2691, max=78567, avg=15216.87, stdev=10080.86 00:40:37.448 lat (usec): min=3368, max=78570, avg=15324.37, stdev=10115.55 00:40:37.448 clat percentiles (usec): 00:40:37.448 | 1.00th=[ 4948], 5.00th=[ 7308], 10.00th=[ 8586], 20.00th=[10028], 00:40:37.448 | 30.00th=[10290], 40.00th=[10814], 50.00th=[12125], 60.00th=[13829], 00:40:37.448 | 70.00th=[14222], 80.00th=[17957], 90.00th=[22152], 95.00th=[36963], 00:40:37.448 | 99.00th=[54264], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:40:37.448 | 99.99th=[78119] 00:40:37.448 bw ( KiB/s): min=15448, max=19008, per=24.63%, avg=17228.00, stdev=2517.30, samples=2 00:40:37.448 iops : min= 3862, max= 4752, avg=4307.00, stdev=629.33, samples=2 00:40:37.448 lat (msec) : 4=0.41%, 10=20.77%, 20=63.32%, 50=13.26%, 100=2.24% 00:40:37.448 cpu : usr=2.59%, sys=4.19%, ctx=407, majf=0, minf=1 00:40:37.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:40:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:37.448 issued rwts: total=4096,4434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:37.448 job2: (groupid=0, jobs=1): err= 0: pid=1277879: Sat Dec 14 16:54:07 2024 00:40:37.448 read: IOPS=4078, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:40:37.448 slat (nsec): min=1092, max=40817k, avg=118234.17, stdev=1039022.03 00:40:37.448 clat (usec): min=2687, max=68228, avg=13763.49, stdev=7534.91 00:40:37.448 lat (usec): min=2692, max=73552, avg=13881.73, stdev=7635.45 00:40:37.448 clat percentiles 
(usec): 00:40:37.448 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10552], 00:40:37.448 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11863], 00:40:37.448 | 70.00th=[13042], 80.00th=[15008], 90.00th=[20055], 95.00th=[24773], 00:40:37.448 | 99.00th=[62129], 99.50th=[64226], 99.90th=[66847], 99.95th=[67634], 00:40:37.448 | 99.99th=[68682] 00:40:37.448 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:40:37.448 slat (nsec): min=1866, max=27530k, avg=106814.10, stdev=782251.88 00:40:37.448 clat (usec): min=1115, max=73529, avg=15434.98, stdev=11057.33 00:40:37.448 lat (usec): min=1125, max=73532, avg=15541.80, stdev=11088.34 00:40:37.448 clat percentiles (usec): 00:40:37.448 | 1.00th=[ 4047], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[10159], 00:40:37.448 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:40:37.448 | 70.00th=[12780], 80.00th=[21103], 90.00th=[26346], 95.00th=[41157], 00:40:37.448 | 99.00th=[61080], 99.50th=[61604], 99.90th=[65799], 99.95th=[65799], 00:40:37.448 | 99.99th=[73925] 00:40:37.448 bw ( KiB/s): min=13880, max=22016, per=25.66%, avg=17948.00, stdev=5753.02, samples=2 00:40:37.448 iops : min= 3470, max= 5504, avg=4487.00, stdev=1438.26, samples=2 00:40:37.448 lat (msec) : 2=0.07%, 4=0.41%, 10=14.05%, 20=69.36%, 50=13.43% 00:40:37.448 lat (msec) : 100=2.67% 00:40:37.448 cpu : usr=2.39%, sys=4.28%, ctx=427, majf=0, minf=1 00:40:37.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:40:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:37.448 issued rwts: total=4103,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:37.448 job3: (groupid=0, jobs=1): err= 0: pid=1277883: Sat Dec 14 16:54:07 2024 00:40:37.448 read: IOPS=3424, BW=13.4MiB/s 
(14.0MB/s)(14.0MiB/1044msec) 00:40:37.448 slat (nsec): min=1437, max=13435k, avg=115527.74, stdev=856950.80 00:40:37.448 clat (usec): min=4905, max=55486, avg=16199.41, stdev=8881.33 00:40:37.448 lat (usec): min=4912, max=59216, avg=16314.94, stdev=8928.38 00:40:37.448 clat percentiles (usec): 00:40:37.448 | 1.00th=[ 7308], 5.00th=[ 8848], 10.00th=[11338], 20.00th=[11863], 00:40:37.448 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13304], 60.00th=[14353], 00:40:37.448 | 70.00th=[15533], 80.00th=[17695], 90.00th=[23200], 95.00th=[33424], 00:40:37.448 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:40:37.448 | 99.99th=[55313] 00:40:37.448 write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec); 0 zone resets 00:40:37.448 slat (usec): min=2, max=11630, avg=150.22, stdev=689.17 00:40:37.448 clat (usec): min=1602, max=48242, avg=20651.49, stdev=10217.37 00:40:37.448 lat (usec): min=1614, max=48249, avg=20801.71, stdev=10281.50 00:40:37.448 clat percentiles (usec): 00:40:37.448 | 1.00th=[ 4424], 5.00th=[ 7701], 10.00th=[ 9896], 20.00th=[11600], 00:40:37.448 | 30.00th=[12518], 40.00th=[14484], 50.00th=[21103], 60.00th=[21890], 00:40:37.448 | 70.00th=[23200], 80.00th=[31327], 90.00th=[36963], 95.00th=[38011], 00:40:37.448 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47973], 99.95th=[48497], 00:40:37.448 | 99.99th=[48497] 00:40:37.448 bw ( KiB/s): min=12336, max=16336, per=20.49%, avg=14336.00, stdev=2828.43, samples=2 00:40:37.448 iops : min= 3084, max= 4084, avg=3584.00, stdev=707.11, samples=2 00:40:37.448 lat (msec) : 2=0.14%, 4=0.14%, 10=7.96%, 20=58.33%, 50=31.78% 00:40:37.448 lat (msec) : 100=1.65% 00:40:37.448 cpu : usr=3.07%, sys=4.41%, ctx=394, majf=0, minf=1 00:40:37.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:40:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:37.448 issued 
rwts: total=3575,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:37.448 00:40:37.448 Run status group 0 (all jobs): 00:40:37.448 READ: bw=64.2MiB/s (67.3MB/s), 13.4MiB/s-20.9MiB/s (14.0MB/s-21.9MB/s), io=67.0MiB (70.3MB), run=1004-1044msec 00:40:37.448 WRITE: bw=68.3MiB/s (71.6MB/s), 13.4MiB/s-21.9MiB/s (14.1MB/s-23.0MB/s), io=71.3MiB (74.8MB), run=1004-1044msec 00:40:37.448 00:40:37.448 Disk stats (read/write): 00:40:37.448 nvme0n1: ios=4490/4608, merge=0/0, ticks=20432/22738, in_queue=43170, util=97.80% 00:40:37.448 nvme0n2: ios=3133/3584, merge=0/0, ticks=23813/29892, in_queue=53705, util=96.65% 00:40:37.448 nvme0n3: ios=3836/4096, merge=0/0, ticks=38558/37793, in_queue=76351, util=86.38% 00:40:37.448 nvme0n4: ios=3130/3118, merge=0/0, ticks=44027/59849, in_queue=103876, util=97.90% 00:40:37.448 16:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:37.448 16:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1278170 00:40:37.448 16:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:37.448 16:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:37.448 [global] 00:40:37.448 thread=1 00:40:37.448 invalidate=1 00:40:37.448 rw=read 00:40:37.448 time_based=1 00:40:37.448 runtime=10 00:40:37.448 ioengine=libaio 00:40:37.448 direct=1 00:40:37.448 bs=4096 00:40:37.448 iodepth=1 00:40:37.448 norandommap=1 00:40:37.448 numjobs=1 00:40:37.448 00:40:37.448 [job0] 00:40:37.448 filename=/dev/nvme0n1 00:40:37.448 [job1] 00:40:37.448 filename=/dev/nvme0n2 00:40:37.448 [job2] 00:40:37.448 filename=/dev/nvme0n3 00:40:37.449 [job3] 00:40:37.449 filename=/dev/nvme0n4 00:40:37.449 Could not set queue depth (nvme0n1) 00:40:37.449 
Could not set queue depth (nvme0n2) 00:40:37.449 Could not set queue depth (nvme0n3) 00:40:37.449 Could not set queue depth (nvme0n4) 00:40:37.707 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:37.707 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:37.707 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:37.707 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:37.707 fio-3.35 00:40:37.707 Starting 4 threads 00:40:40.987 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:40.987 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37355520, buflen=4096 00:40:40.987 fio: pid=1278556, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:40.987 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:40.987 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:40.987 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:40.987 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=33161216, buflen=4096 00:40:40.987 fio: pid=1278550, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:40.987 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45707264, buflen=4096 00:40:40.987 fio: pid=1278524, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:40.987 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:40.987 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:41.246 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:41.246 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:41.246 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=31711232, buflen=4096 00:40:41.246 fio: pid=1278536, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:41.246 00:40:41.246 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278524: Sat Dec 14 16:54:11 2024 00:40:41.246 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(43.6MiB/3154msec) 00:40:41.246 slat (usec): min=7, max=21057, avg=11.60, stdev=219.13 00:40:41.246 clat (usec): min=182, max=1601, avg=267.21, stdev=44.37 00:40:41.246 lat (usec): min=190, max=21408, avg=278.81, stdev=224.71 00:40:41.246 clat percentiles (usec): 00:40:41.246 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 221], 00:40:41.246 | 30.00th=[ 243], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 285], 00:40:41.246 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318], 00:40:41.246 | 99.00th=[ 351], 99.50th=[ 408], 99.90th=[ 502], 99.95th=[ 537], 00:40:41.246 | 99.99th=[ 1004] 00:40:41.246 bw ( KiB/s): min=13160, max=17141, per=32.98%, avg=14126.17, stdev=1534.22, samples=6 00:40:41.246 iops : min= 3290, max= 4285, 
avg=3531.50, stdev=383.46, samples=6 00:40:41.246 lat (usec) : 250=32.37%, 500=67.50%, 750=0.11% 00:40:41.246 lat (msec) : 2=0.02% 00:40:41.246 cpu : usr=1.97%, sys=5.99%, ctx=11162, majf=0, minf=1 00:40:41.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:41.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.246 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.246 issued rwts: total=11160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:41.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:41.246 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278536: Sat Dec 14 16:54:11 2024 00:40:41.246 read: IOPS=2295, BW=9181KiB/s (9401kB/s)(30.2MiB/3373msec) 00:40:41.246 slat (usec): min=6, max=694, avg= 9.23, stdev= 8.18 00:40:41.246 clat (usec): min=180, max=41979, avg=421.22, stdev=2452.31 00:40:41.246 lat (usec): min=189, max=42017, avg=430.44, stdev=2454.80 00:40:41.246 clat percentiles (usec): 00:40:41.246 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:40:41.246 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:40:41.246 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 334], 00:40:41.246 | 99.00th=[ 396], 99.50th=[ 494], 99.90th=[41157], 99.95th=[41681], 00:40:41.246 | 99.99th=[42206] 00:40:41.246 bw ( KiB/s): min= 106, max=14672, per=24.07%, avg=10311.00, stdev=6140.66, samples=6 00:40:41.246 iops : min= 26, max= 3668, avg=2577.67, stdev=1535.33, samples=6 00:40:41.246 lat (usec) : 250=17.51%, 500=82.04%, 750=0.08% 00:40:41.246 lat (msec) : 50=0.36% 00:40:41.246 cpu : usr=1.04%, sys=3.32%, ctx=7749, majf=0, minf=2 00:40:41.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:41.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.246 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:40:41.246 issued rwts: total=7743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:41.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:41.246 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278550: Sat Dec 14 16:54:11 2024 00:40:41.246 read: IOPS=2729, BW=10.7MiB/s (11.2MB/s)(31.6MiB/2967msec) 00:40:41.246 slat (nsec): min=6408, max=76476, avg=7565.32, stdev=1557.29 00:40:41.246 clat (usec): min=209, max=41883, avg=355.11, stdev=1753.33 00:40:41.246 lat (usec): min=216, max=41900, avg=362.67, stdev=1753.98 00:40:41.246 clat percentiles (usec): 00:40:41.246 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:40:41.246 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:40:41.246 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 351], 00:40:41.246 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[41157], 99.95th=[41157], 00:40:41.246 | 99.99th=[41681] 00:40:41.246 bw ( KiB/s): min= 7296, max=14696, per=29.75%, avg=12742.40, stdev=3091.01, samples=5 00:40:41.246 iops : min= 1824, max= 3674, avg=3185.60, stdev=772.75, samples=5 00:40:41.246 lat (usec) : 250=14.68%, 500=85.04%, 750=0.06% 00:40:41.246 lat (msec) : 4=0.01%, 50=0.19% 00:40:41.246 cpu : usr=0.81%, sys=2.53%, ctx=8098, majf=0, minf=2 00:40:41.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:41.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.246 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.246 issued rwts: total=8097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:41.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:41.246 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278556: Sat Dec 14 16:54:11 2024 00:40:41.246 read: IOPS=3344, BW=13.1MiB/s (13.7MB/s)(35.6MiB/2727msec) 00:40:41.246 slat (nsec): 
min=7339, max=45429, avg=9148.54, stdev=1565.92 00:40:41.246 clat (usec): min=184, max=669, avg=285.06, stdev=36.94 00:40:41.246 lat (usec): min=193, max=677, avg=294.21, stdev=37.19 00:40:41.246 clat percentiles (usec): 00:40:41.246 | 1.00th=[ 204], 5.00th=[ 221], 10.00th=[ 231], 20.00th=[ 269], 00:40:41.246 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:40:41.246 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 330], 00:40:41.246 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 506], 99.95th=[ 545], 00:40:41.246 | 99.99th=[ 668] 00:40:41.246 bw ( KiB/s): min=12608, max=14256, per=30.90%, avg=13235.20, stdev=616.72, samples=5 00:40:41.246 iops : min= 3152, max= 3564, avg=3308.80, stdev=154.18, samples=5 00:40:41.246 lat (usec) : 250=15.55%, 500=84.31%, 750=0.13% 00:40:41.246 cpu : usr=1.58%, sys=6.02%, ctx=9121, majf=0, minf=2 00:40:41.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:41.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.246 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.246 issued rwts: total=9121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:41.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:41.246 00:40:41.246 Run status group 0 (all jobs): 00:40:41.246 READ: bw=41.8MiB/s (43.9MB/s), 9181KiB/s-13.8MiB/s (9401kB/s-14.5MB/s), io=141MiB (148MB), run=2727-3373msec 00:40:41.246 00:40:41.246 Disk stats (read/write): 00:40:41.246 nvme0n1: ios=11018/0, merge=0/0, ticks=2819/0, in_queue=2819, util=94.76% 00:40:41.246 nvme0n2: ios=7775/0, merge=0/0, ticks=4046/0, in_queue=4046, util=99.94% 00:40:41.246 nvme0n3: ios=8093/0, merge=0/0, ticks=2694/0, in_queue=2694, util=96.52% 00:40:41.246 nvme0n4: ios=8690/0, merge=0/0, ticks=2422/0, in_queue=2422, util=96.48% 00:40:41.504 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:40:41.504 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:41.504 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:41.504 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:41.765 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:41.765 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:42.086 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:42.086 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1278170 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:42.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:42.374 16:54:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:42.374 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:42.375 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:42.375 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:42.375 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:42.375 nvmf hotplug test: fio failed as expected 00:40:42.375 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:42.672 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:42.673 16:54:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:42.673 rmmod nvme_tcp 00:40:42.673 rmmod nvme_fabrics 00:40:42.673 rmmod nvme_keyring 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1275465 ']' 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1275465 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1275465 ']' 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1275465 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- common/autotest_common.sh@959 -- # uname 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1275465 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1275465' 00:40:42.673 killing process with pid 1275465 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1275465 00:40:42.673 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1275465 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:42.932 16:54:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:42.932 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.909 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:44.909 00:40:44.909 real 0m25.813s 00:40:44.909 user 1m31.691s 00:40:44.909 sys 0m11.384s 00:40:44.909 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:44.909 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:44.909 ************************************ 00:40:44.909 END TEST nvmf_fio_target 00:40:44.909 ************************************ 00:40:44.909 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:44.909 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:44.909 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:44.909 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:45.169 ************************************ 00:40:45.170 START TEST nvmf_bdevio 00:40:45.170 
************************************ 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:45.170 * Looking for test storage... 00:40:45.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:45.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.170 --rc genhtml_branch_coverage=1 00:40:45.170 --rc genhtml_function_coverage=1 00:40:45.170 --rc genhtml_legend=1 00:40:45.170 --rc geninfo_all_blocks=1 00:40:45.170 --rc geninfo_unexecuted_blocks=1 00:40:45.170 00:40:45.170 ' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:45.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.170 --rc genhtml_branch_coverage=1 00:40:45.170 --rc genhtml_function_coverage=1 00:40:45.170 --rc genhtml_legend=1 00:40:45.170 --rc geninfo_all_blocks=1 00:40:45.170 --rc geninfo_unexecuted_blocks=1 00:40:45.170 00:40:45.170 ' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:45.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.170 --rc genhtml_branch_coverage=1 00:40:45.170 --rc genhtml_function_coverage=1 00:40:45.170 --rc genhtml_legend=1 00:40:45.170 --rc geninfo_all_blocks=1 00:40:45.170 --rc geninfo_unexecuted_blocks=1 00:40:45.170 00:40:45.170 ' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:45.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:40:45.170 --rc genhtml_branch_coverage=1 00:40:45.170 --rc genhtml_function_coverage=1 00:40:45.170 --rc genhtml_legend=1 00:40:45.170 --rc geninfo_all_blocks=1 00:40:45.170 --rc geninfo_unexecuted_blocks=1 00:40:45.170 00:40:45.170 ' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:45.170 16:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.170 16:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:45.170 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:51.743 16:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:51.743 16:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:51.743 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:51.744 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:51.744 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:51.744 Found net devices under 0000:af:00.0: cvl_0_0 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:51.744 Found net devices under 0000:af:00.1: cvl_0_1 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:51.744 
16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:51.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:51.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:40:51.744 00:40:51.744 --- 10.0.0.2 ping statistics --- 00:40:51.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.744 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:51.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:51.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:40:51.744 00:40:51.744 --- 10.0.0.1 ping statistics --- 00:40:51.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.744 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:51.744 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1282913 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1282913 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1282913 ']' 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:51.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:51.744 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.744 [2024-12-14 16:54:21.103376] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:51.744 [2024-12-14 16:54:21.104584] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:51.744 [2024-12-14 16:54:21.104627] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:51.744 [2024-12-14 16:54:21.185779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:51.744 [2024-12-14 16:54:21.208333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:51.744 [2024-12-14 16:54:21.208372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:51.745 [2024-12-14 16:54:21.208379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:51.745 [2024-12-14 16:54:21.208385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:51.745 [2024-12-14 16:54:21.208390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:51.745 [2024-12-14 16:54:21.209822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:40:51.745 [2024-12-14 16:54:21.209929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:40:51.745 [2024-12-14 16:54:21.210035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:51.745 [2024-12-14 16:54:21.210036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:40:51.745 [2024-12-14 16:54:21.272973] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:51.745 [2024-12-14 16:54:21.274096] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:51.745 [2024-12-14 16:54:21.274224] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:51.745 [2024-12-14 16:54:21.274611] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:51.745 [2024-12-14 16:54:21.274642] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.745 [2024-12-14 16:54:21.338729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.745 Malloc0 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:51.745 [2024-12-14 16:54:21.430930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:51.745 { 00:40:51.745 "params": { 00:40:51.745 "name": "Nvme$subsystem", 00:40:51.745 "trtype": "$TEST_TRANSPORT", 00:40:51.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:51.745 "adrfam": "ipv4", 00:40:51.745 "trsvcid": "$NVMF_PORT", 00:40:51.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:51.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:51.745 "hdgst": ${hdgst:-false}, 00:40:51.745 "ddgst": ${ddgst:-false} 00:40:51.745 }, 00:40:51.745 "method": "bdev_nvme_attach_controller" 00:40:51.745 } 00:40:51.745 EOF 00:40:51.745 )") 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:51.745 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:51.745 "params": { 00:40:51.745 "name": "Nvme1", 00:40:51.745 "trtype": "tcp", 00:40:51.745 "traddr": "10.0.0.2", 00:40:51.745 "adrfam": "ipv4", 00:40:51.745 "trsvcid": "4420", 00:40:51.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:51.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:51.745 "hdgst": false, 00:40:51.745 "ddgst": false 00:40:51.745 }, 00:40:51.745 "method": "bdev_nvme_attach_controller" 00:40:51.745 }' 00:40:51.745 [2024-12-14 16:54:21.482734] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:51.745 [2024-12-14 16:54:21.482783] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282945 ] 00:40:51.745 [2024-12-14 16:54:21.559296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:51.745 [2024-12-14 16:54:21.584255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:51.745 [2024-12-14 16:54:21.584369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:51.745 [2024-12-14 16:54:21.584369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:51.745 I/O targets: 00:40:51.745 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:51.745 00:40:51.745 00:40:51.745 CUnit - A unit testing framework for C - Version 2.1-3 00:40:51.745 http://cunit.sourceforge.net/ 00:40:51.745 00:40:51.745 00:40:51.745 Suite: bdevio tests on: Nvme1n1 00:40:51.745 Test: blockdev write read block ...passed 00:40:52.003 Test: blockdev write zeroes read block ...passed 00:40:52.003 Test: blockdev write zeroes read no split ...passed 00:40:52.003 Test: blockdev 
write zeroes read split ...passed 00:40:52.003 Test: blockdev write zeroes read split partial ...passed 00:40:52.003 Test: blockdev reset ...[2024-12-14 16:54:21.880363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:52.003 [2024-12-14 16:54:21.880424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ed340 (9): Bad file descriptor 00:40:52.003 [2024-12-14 16:54:21.933716] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:52.003 passed 00:40:52.003 Test: blockdev write read 8 blocks ...passed 00:40:52.003 Test: blockdev write read size > 128k ...passed 00:40:52.003 Test: blockdev write read invalid size ...passed 00:40:52.003 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:52.003 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:52.003 Test: blockdev write read max offset ...passed 00:40:52.003 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:52.003 Test: blockdev writev readv 8 blocks ...passed 00:40:52.003 Test: blockdev writev readv 30 x 1block ...passed 00:40:52.262 Test: blockdev writev readv block ...passed 00:40:52.262 Test: blockdev writev readv size > 128k ...passed 00:40:52.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:52.262 Test: blockdev comparev and writev ...[2024-12-14 16:54:22.104461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:52.262 [2024-12-14 16:54:22.104495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.104510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:52.262 
[2024-12-14 16:54:22.104517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.104806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:52.262 [2024-12-14 16:54:22.104817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.104828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:52.262 [2024-12-14 16:54:22.104835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.105117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:52.262 [2024-12-14 16:54:22.105127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.105138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:52.262 [2024-12-14 16:54:22.105144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.105432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:52.262 [2024-12-14 16:54:22.105442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.105454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:52.262 [2024-12-14 16:54:22.105461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:52.262 passed 00:40:52.262 Test: blockdev nvme passthru rw ...passed 00:40:52.262 Test: blockdev nvme passthru vendor specific ...[2024-12-14 16:54:22.187790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:52.262 [2024-12-14 16:54:22.187809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.187921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:52.262 [2024-12-14 16:54:22.187931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.188049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:52.262 [2024-12-14 16:54:22.188058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:52.262 [2024-12-14 16:54:22.188176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:52.262 [2024-12-14 16:54:22.188186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:52.262 passed 00:40:52.262 Test: blockdev nvme admin passthru ...passed 00:40:52.262 Test: blockdev copy ...passed 00:40:52.262 00:40:52.262 Run Summary: Type Total Ran Passed Failed Inactive 00:40:52.262 suites 1 1 n/a 0 0 00:40:52.262 tests 23 23 23 0 0 00:40:52.262 asserts 152 152 152 0 n/a 00:40:52.262 00:40:52.262 Elapsed time = 0.947 
seconds 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:52.521 rmmod nvme_tcp 00:40:52.521 rmmod nvme_fabrics 00:40:52.521 rmmod nvme_keyring 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1282913 ']' 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1282913 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1282913 ']' 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1282913 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1282913 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1282913' 00:40:52.521 killing process with pid 1282913 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1282913 00:40:52.521 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1282913 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:52.780 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:54.685 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:54.685 00:40:54.685 real 0m9.734s 00:40:54.685 user 0m8.019s 00:40:54.685 sys 0m5.092s 00:40:54.685 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.685 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:54.685 ************************************ 00:40:54.685 END TEST nvmf_bdevio 00:40:54.685 ************************************ 00:40:54.944 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:54.944 00:40:54.944 real 4m30.762s 00:40:54.944 user 9m4.931s 00:40:54.944 sys 1m49.850s 00:40:54.944 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:40:54.944 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:54.944 ************************************ 00:40:54.944 END TEST nvmf_target_core_interrupt_mode 00:40:54.944 ************************************ 00:40:54.944 16:54:24 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:54.944 16:54:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:54.944 16:54:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:54.944 16:54:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:54.944 ************************************ 00:40:54.944 START TEST nvmf_interrupt 00:40:54.944 ************************************ 00:40:54.944 16:54:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:54.944 * Looking for test storage... 
00:40:54.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:54.944 16:54:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:54.944 16:54:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:40:54.944 16:54:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:54.944 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:54.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.945 --rc genhtml_branch_coverage=1 00:40:54.945 --rc genhtml_function_coverage=1 00:40:54.945 --rc genhtml_legend=1 00:40:54.945 --rc geninfo_all_blocks=1 00:40:54.945 --rc geninfo_unexecuted_blocks=1 00:40:54.945 00:40:54.945 ' 00:40:54.945 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:54.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.945 --rc genhtml_branch_coverage=1 00:40:54.945 --rc 
genhtml_function_coverage=1 00:40:54.945 --rc genhtml_legend=1 00:40:54.945 --rc geninfo_all_blocks=1 00:40:54.945 --rc geninfo_unexecuted_blocks=1 00:40:54.945 00:40:54.945 ' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:55.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.204 --rc genhtml_branch_coverage=1 00:40:55.204 --rc genhtml_function_coverage=1 00:40:55.204 --rc genhtml_legend=1 00:40:55.204 --rc geninfo_all_blocks=1 00:40:55.204 --rc geninfo_unexecuted_blocks=1 00:40:55.204 00:40:55.204 ' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:55.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.204 --rc genhtml_branch_coverage=1 00:40:55.204 --rc genhtml_function_coverage=1 00:40:55.204 --rc genhtml_legend=1 00:40:55.204 --rc geninfo_all_blocks=1 00:40:55.204 --rc geninfo_unexecuted_blocks=1 00:40:55.204 00:40:55.204 ' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.204 
16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.204 
16:54:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:55.204 16:54:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:55.204 
16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:55.204 16:54:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:01.775 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:01.776 16:54:30 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:01.776 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:01.776 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:01.776 16:54:30 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:01.776 Found net devices under 0000:af:00.0: cvl_0_0 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:01.776 Found net devices under 0000:af:00.1: cvl_0_1 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:01.776 16:54:30 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:01.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:01.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:41:01.776 00:41:01.776 --- 10.0.0.2 ping statistics --- 00:41:01.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.776 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:01.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:01.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:41:01.776 00:41:01.776 --- 10.0.0.1 ping statistics --- 00:41:01.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.776 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:01.776 16:54:30 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1286534 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1286534 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1286534 ']' 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:01.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:01.776 16:54:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.776 [2024-12-14 16:54:31.012128] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:01.776 [2024-12-14 16:54:31.013085] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:41:01.777 [2024-12-14 16:54:31.013119] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:01.777 [2024-12-14 16:54:31.091800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:01.777 [2024-12-14 16:54:31.113610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:01.777 [2024-12-14 16:54:31.113650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:01.777 [2024-12-14 16:54:31.113671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:01.777 [2024-12-14 16:54:31.113679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:01.777 [2024-12-14 16:54:31.113684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:01.777 [2024-12-14 16:54:31.114820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:01.777 [2024-12-14 16:54:31.114821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.777 [2024-12-14 16:54:31.178265] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:01.777 [2024-12-14 16:54:31.178769] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:01.777 [2024-12-14 16:54:31.178955] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:01.777 5000+0 records in 00:41:01.777 5000+0 records out 00:41:01.777 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0174919 s, 585 MB/s 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.777 AIO0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.777 16:54:31 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.777 [2024-12-14 16:54:31.303626] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.777 [2024-12-14 16:54:31.343991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1286534 0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286534 0 idle 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286534 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286534 -w 256 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286534 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0' 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286534 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:01.777 
16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1286534 1 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286534 1 idle 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286534 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286534 -w 256 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286580 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286580 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1286679 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1286534 0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1286534 0 busy 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286534 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286534 -w 256 00:41:01.777 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286534 root 20 0 128.2g 46848 33792 R 68.8 0.1 0:00.33 reactor_0' 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286534 root 20 0 128.2g 46848 33792 R 68.8 0.1 0:00.33 reactor_0 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=68.8 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=68 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:02.036 16:54:31 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1286534 1 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1286534 1 busy 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286534 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286534 -w 256 00:41:02.036 16:54:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286580 root 20 0 128.2g 46848 33792 R 93.3 0.1 0:00.23 reactor_1' 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286580 root 20 0 128.2g 46848 33792 R 93.3 0.1 0:00.23 reactor_1 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:02.036 16:54:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1286679 00:41:12.007 Initializing NVMe Controllers 00:41:12.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:12.007 Controller IO queue size 256, less than required. 00:41:12.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:12.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:12.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:12.007 Initialization complete. Launching workers. 
00:41:12.007 ======================================================== 00:41:12.007 Latency(us) 00:41:12.007 Device Information : IOPS MiB/s Average min max 00:41:12.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16389.48 64.02 15628.19 3045.92 33590.79 00:41:12.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16541.38 64.61 15481.04 7429.29 56320.85 00:41:12.007 ======================================================== 00:41:12.007 Total : 32930.85 128.64 15554.27 3045.92 56320.85 00:41:12.007 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1286534 0 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286534 0 idle 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286534 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286534 -w 256 00:41:12.007 16:54:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:41:12.266 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286534 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.22 reactor_0' 00:41:12.266 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286534 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.22 reactor_0 00:41:12.266 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:12.266 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:12.266 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:12.266 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:12.266 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:12.266 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1286534 1 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286534 1 idle 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286534 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:12.267 16:54:42 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286534 -w 256 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286580 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286580 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:12.267 16:54:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:12.835 16:54:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:41:12.835 16:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:12.835 16:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:12.835 16:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:12.835 16:54:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1286534 0 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286534 0 idle 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286534 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286534 -w 256 00:41:14.740 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286534 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.47 reactor_0' 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286534 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.47 reactor_0 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1286534 1 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286534 1 idle 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286534 00:41:15.000 
16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286534 -w 256 00:41:15.000 16:54:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286580 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.11 reactor_1' 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286580 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.11 reactor_1 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:15.259 16:54:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:15.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:15.518 rmmod nvme_tcp 00:41:15.518 rmmod nvme_fabrics 00:41:15.518 rmmod nvme_keyring 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:15.518 16:54:45 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1286534 ']' 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1286534 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1286534 ']' 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1286534 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1286534 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1286534' 00:41:15.518 killing process with pid 1286534 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1286534 00:41:15.518 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1286534 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:15.777 16:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.311 16:54:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:18.311 00:41:18.311 real 0m22.943s 00:41:18.311 user 0m39.854s 00:41:18.311 sys 0m8.222s 00:41:18.311 16:54:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.311 16:54:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:18.311 ************************************ 00:41:18.311 END TEST nvmf_interrupt 00:41:18.311 ************************************ 00:41:18.311 00:41:18.311 real 35m22.012s 00:41:18.311 user 86m5.946s 00:41:18.311 sys 10m27.191s 00:41:18.311 16:54:47 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.311 16:54:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.311 ************************************ 00:41:18.311 END TEST nvmf_tcp 00:41:18.311 ************************************ 00:41:18.311 16:54:47 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:18.311 16:54:47 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:18.311 16:54:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:18.311 16:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.311 16:54:47 -- common/autotest_common.sh@10 -- # set +x 00:41:18.311 ************************************ 
00:41:18.311 START TEST spdkcli_nvmf_tcp 00:41:18.311 ************************************ 00:41:18.311 16:54:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:18.311 * Looking for test storage... 00:41:18.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:18.311 16:54:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:18.311 16:54:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:18.311 16:54:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:18.311 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:18.311 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:18.311 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:18.311 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:18.311 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:18.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.312 --rc genhtml_branch_coverage=1 00:41:18.312 --rc genhtml_function_coverage=1 00:41:18.312 --rc genhtml_legend=1 00:41:18.312 --rc geninfo_all_blocks=1 00:41:18.312 --rc geninfo_unexecuted_blocks=1 00:41:18.312 00:41:18.312 ' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:18.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.312 --rc genhtml_branch_coverage=1 00:41:18.312 --rc genhtml_function_coverage=1 00:41:18.312 --rc genhtml_legend=1 00:41:18.312 --rc geninfo_all_blocks=1 
00:41:18.312 --rc geninfo_unexecuted_blocks=1 00:41:18.312 00:41:18.312 ' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:18.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.312 --rc genhtml_branch_coverage=1 00:41:18.312 --rc genhtml_function_coverage=1 00:41:18.312 --rc genhtml_legend=1 00:41:18.312 --rc geninfo_all_blocks=1 00:41:18.312 --rc geninfo_unexecuted_blocks=1 00:41:18.312 00:41:18.312 ' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:18.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.312 --rc genhtml_branch_coverage=1 00:41:18.312 --rc genhtml_function_coverage=1 00:41:18.312 --rc genhtml_legend=1 00:41:18.312 --rc geninfo_all_blocks=1 00:41:18.312 --rc geninfo_unexecuted_blocks=1 00:41:18.312 00:41:18.312 ' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:18.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1289316 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1289316 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1289316 ']' 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:18.312 
16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.312 [2024-12-14 16:54:48.160189] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:18.312 [2024-12-14 16:54:48.160237] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289316 ] 00:41:18.312 [2024-12-14 16:54:48.232743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:18.312 [2024-12-14 16:54:48.256900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:18.312 [2024-12-14 16:54:48.256902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:41:18.312 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:18.313 16:54:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.313 16:54:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:18.313 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:18.313 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:18.313 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:18.313 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:18.313 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:18.313 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:18.313 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:18.313 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:18.313 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:18.313 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:18.313 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:18.313 ' 00:41:21.600 [2024-12-14 16:54:51.086399] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:22.534 [2024-12-14 16:54:52.426723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:25.068 [2024-12-14 16:54:54.910251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:41:27.602 [2024-12-14 16:54:57.060936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:28.979 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:28.979 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:28.979 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:28.979 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:28.979 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:28.979 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:28.979 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:28.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:28.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:28.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:28.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:28.980 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:28.980 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:28.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:28.980 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:28.980 16:54:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:28.980 16:54:58 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:41:28.980 16:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:28.980 16:54:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:28.980 16:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:28.980 16:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:28.980 16:54:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:28.980 16:54:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:29.239 16:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:29.239 16:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:29.239 16:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:29.239 16:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:29.239 16:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:29.497 16:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:29.497 16:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:29.497 16:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:29.497 16:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:29.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:29.497 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:29.497 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:29.497 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:29.497 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:29.497 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:29.497 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:29.497 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:29.497 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:29.497 ' 00:41:34.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:34.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:34.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:34.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:34.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:34.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:34.769 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:34.769 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:34.769 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:34.769 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:34.769 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:34.769 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:34.769 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:34.769 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:35.028 16:55:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:35.029 16:55:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:35.029 16:55:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:35.029 16:55:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1289316 00:41:35.029 16:55:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1289316 ']' 00:41:35.029 16:55:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1289316 00:41:35.029 16:55:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:35.029 16:55:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:35.029 16:55:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1289316 00:41:35.029 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:35.029 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:35.029 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1289316' 00:41:35.029 killing process with pid 1289316 00:41:35.029 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1289316 00:41:35.029 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1289316 00:41:35.287 16:55:05 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1289316 ']' 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1289316 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1289316 ']' 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1289316 00:41:35.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1289316) - No such process 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1289316 is not found' 00:41:35.287 Process with pid 1289316 is not found 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:35.287 00:41:35.287 real 0m17.289s 00:41:35.287 user 0m38.161s 00:41:35.287 sys 0m0.776s 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:35.287 16:55:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:35.287 ************************************ 00:41:35.287 END TEST spdkcli_nvmf_tcp 00:41:35.287 ************************************ 00:41:35.287 16:55:05 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:35.287 16:55:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:35.287 16:55:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:41:35.287 16:55:05 -- common/autotest_common.sh@10 -- # set +x 00:41:35.287 ************************************ 00:41:35.287 START TEST nvmf_identify_passthru 00:41:35.287 ************************************ 00:41:35.287 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:35.287 * Looking for test storage... 00:41:35.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:35.287 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:35.287 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:41:35.287 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:35.546 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:35.546 16:55:05 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:35.546 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:35.546 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:35.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:35.546 --rc genhtml_branch_coverage=1 00:41:35.546 --rc genhtml_function_coverage=1 00:41:35.546 --rc genhtml_legend=1 00:41:35.546 --rc geninfo_all_blocks=1 00:41:35.546 --rc geninfo_unexecuted_blocks=1 00:41:35.546 
00:41:35.546 ' 00:41:35.546 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:35.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:35.546 --rc genhtml_branch_coverage=1 00:41:35.546 --rc genhtml_function_coverage=1 00:41:35.546 --rc genhtml_legend=1 00:41:35.546 --rc geninfo_all_blocks=1 00:41:35.546 --rc geninfo_unexecuted_blocks=1 00:41:35.546 00:41:35.546 ' 00:41:35.546 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:35.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:35.546 --rc genhtml_branch_coverage=1 00:41:35.546 --rc genhtml_function_coverage=1 00:41:35.546 --rc genhtml_legend=1 00:41:35.546 --rc geninfo_all_blocks=1 00:41:35.546 --rc geninfo_unexecuted_blocks=1 00:41:35.546 00:41:35.546 ' 00:41:35.546 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:35.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:35.546 --rc genhtml_branch_coverage=1 00:41:35.546 --rc genhtml_function_coverage=1 00:41:35.546 --rc genhtml_legend=1 00:41:35.546 --rc geninfo_all_blocks=1 00:41:35.546 --rc geninfo_unexecuted_blocks=1 00:41:35.546 00:41:35.546 ' 00:41:35.546 16:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:35.546 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:35.547 16:55:05 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:35.547 16:55:05 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:35.547 16:55:05 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:35.547 16:55:05 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:35.547 16:55:05 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:35.547 16:55:05 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:35.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:35.547 16:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:35.547 16:55:05 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:35.547 16:55:05 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:35.547 16:55:05 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:35.547 16:55:05 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:35.547 16:55:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:35.547 16:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:35.547 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:35.547 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:35.547 16:55:05 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:35.547 16:55:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:42.118 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:42.118 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:42.118 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:41:42.118 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:42.118 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:42.118 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:42.118 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:42.119 
16:55:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:42.119 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:42.119 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:42.119 Found net devices under 0000:af:00.0: cvl_0_0 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:42.119 16:55:11 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:42.119 Found net devices under 0000:af:00.1: cvl_0_1 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:42.119 
16:55:11 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:42.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:42.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:41:42.119 00:41:42.119 --- 10.0.0.2 ping statistics --- 00:41:42.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.119 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:42.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:42.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:41:42.119 00:41:42.119 --- 10.0.0.1 ping statistics --- 00:41:42.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.119 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:42.119 16:55:11 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:42.119 16:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:42.119 16:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:42.119 
16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:41:42.119 16:55:11 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:41:42.120 16:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:41:42.120 16:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:41:42.120 16:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:42.120 16:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:42.120 16:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:46.310 16:55:15 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:41:46.310 16:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:46.310 16:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:46.310 16:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:49.599 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:49.599 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.599 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.599 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1296409 00:41:49.599 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:49.599 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:49.599 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1296409 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1296409 ']' 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:49.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:49.599 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.858 [2024-12-14 16:55:19.713218] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:49.858 [2024-12-14 16:55:19.713264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:49.858 [2024-12-14 16:55:19.790996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:49.858 [2024-12-14 16:55:19.814490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:49.858 [2024-12-14 16:55:19.814530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:49.858 [2024-12-14 16:55:19.814537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:49.858 [2024-12-14 16:55:19.814543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:49.858 [2024-12-14 16:55:19.814547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:49.858 [2024-12-14 16:55:19.815838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:49.858 [2024-12-14 16:55:19.815947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:49.858 [2024-12-14 16:55:19.816057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:49.858 [2024-12-14 16:55:19.816058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:49.858 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:49.858 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:49.858 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:49.858 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.858 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.858 INFO: Log level set to 20 00:41:49.858 INFO: Requests: 00:41:49.858 { 00:41:49.858 "jsonrpc": "2.0", 00:41:49.858 "method": "nvmf_set_config", 00:41:49.858 "id": 1, 00:41:49.858 "params": { 00:41:49.858 "admin_cmd_passthru": { 00:41:49.858 "identify_ctrlr": true 00:41:49.858 } 00:41:49.858 } 00:41:49.858 } 00:41:49.858 00:41:49.858 INFO: response: 00:41:49.858 { 00:41:49.858 "jsonrpc": "2.0", 00:41:49.858 "id": 1, 00:41:49.858 "result": true 00:41:49.858 } 00:41:49.858 00:41:49.858 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.858 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:49.858 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.858 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:49.858 INFO: Setting log level to 20 00:41:49.858 INFO: Setting log level to 20 00:41:49.858 INFO: Log level set to 20 00:41:49.858 INFO: Log level set to 20 00:41:49.858 
INFO: Requests: 00:41:49.858 { 00:41:49.858 "jsonrpc": "2.0", 00:41:49.858 "method": "framework_start_init", 00:41:49.858 "id": 1 00:41:49.858 } 00:41:49.858 00:41:49.858 INFO: Requests: 00:41:49.858 { 00:41:49.858 "jsonrpc": "2.0", 00:41:49.858 "method": "framework_start_init", 00:41:49.858 "id": 1 00:41:49.858 } 00:41:49.858 00:41:49.858 [2024-12-14 16:55:19.939419] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:50.117 INFO: response: 00:41:50.117 { 00:41:50.117 "jsonrpc": "2.0", 00:41:50.117 "id": 1, 00:41:50.117 "result": true 00:41:50.117 } 00:41:50.117 00:41:50.117 INFO: response: 00:41:50.117 { 00:41:50.117 "jsonrpc": "2.0", 00:41:50.117 "id": 1, 00:41:50.117 "result": true 00:41:50.117 } 00:41:50.117 00:41:50.117 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.117 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:50.117 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.117 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.117 INFO: Setting log level to 40 00:41:50.117 INFO: Setting log level to 40 00:41:50.117 INFO: Setting log level to 40 00:41:50.117 [2024-12-14 16:55:19.948693] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:50.117 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.117 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:50.117 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:50.117 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:50.117 16:55:19 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:41:50.117 16:55:19 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.117 16:55:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:52.838 Nvme0n1 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.838 16:55:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.838 16:55:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.838 16:55:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:52.838 [2024-12-14 16:55:22.853890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.838 16:55:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.838 16:55:22 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:52.838 [ 00:41:52.838 { 00:41:52.838 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:52.838 "subtype": "Discovery", 00:41:52.838 "listen_addresses": [], 00:41:52.838 "allow_any_host": true, 00:41:52.838 "hosts": [] 00:41:52.838 }, 00:41:52.838 { 00:41:52.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:52.838 "subtype": "NVMe", 00:41:52.838 "listen_addresses": [ 00:41:52.838 { 00:41:52.838 "trtype": "TCP", 00:41:52.838 "adrfam": "IPv4", 00:41:52.838 "traddr": "10.0.0.2", 00:41:52.838 "trsvcid": "4420" 00:41:52.838 } 00:41:52.838 ], 00:41:52.838 "allow_any_host": true, 00:41:52.838 "hosts": [], 00:41:52.838 "serial_number": "SPDK00000000000001", 00:41:52.838 "model_number": "SPDK bdev Controller", 00:41:52.838 "max_namespaces": 1, 00:41:52.838 "min_cntlid": 1, 00:41:52.838 "max_cntlid": 65519, 00:41:52.838 "namespaces": [ 00:41:52.838 { 00:41:52.838 "nsid": 1, 00:41:52.838 "bdev_name": "Nvme0n1", 00:41:52.838 "name": "Nvme0n1", 00:41:52.838 "nguid": "A0895FCEC3F74BB3A2FDD17C8DD4F752", 00:41:52.838 "uuid": "a0895fce-c3f7-4bb3-a2fd-d17c8dd4f752" 00:41:52.838 } 00:41:52.838 ] 00:41:52.838 } 00:41:52.838 ] 00:41:52.838 16:55:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.838 16:55:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:52.838 16:55:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:52.838 16:55:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:53.096 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:41:53.096 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:53.096 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:53.096 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:53.354 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:53.354 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:41:53.354 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:53.354 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:53.354 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.354 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:53.354 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.354 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:53.354 16:55:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:53.354 rmmod nvme_tcp 00:41:53.354 rmmod nvme_fabrics 00:41:53.354 rmmod nvme_keyring 00:41:53.354 16:55:23 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1296409 ']' 00:41:53.354 16:55:23 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1296409 00:41:53.354 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1296409 ']' 00:41:53.354 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1296409 00:41:53.354 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:53.354 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:53.354 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1296409 00:41:53.612 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:53.612 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:53.612 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1296409' 00:41:53.612 killing process with pid 1296409 00:41:53.612 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1296409 00:41:53.612 16:55:23 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1296409 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:54.986 16:55:24 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:54.986 16:55:24 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:54.986 16:55:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:54.986 16:55:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:56.887 16:55:26 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:56.887 00:41:56.887 real 0m21.700s 00:41:56.887 user 0m27.764s 00:41:56.887 sys 0m5.190s 00:41:56.887 16:55:26 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:56.887 16:55:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:56.887 ************************************ 00:41:56.887 END TEST nvmf_identify_passthru 00:41:56.887 ************************************ 00:41:57.145 16:55:26 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:57.145 16:55:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:57.145 16:55:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:57.145 16:55:26 -- common/autotest_common.sh@10 -- # set +x 00:41:57.145 ************************************ 00:41:57.145 START TEST nvmf_dif 00:41:57.145 ************************************ 00:41:57.145 16:55:27 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:57.145 * Looking for test storage... 
00:41:57.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:57.145 16:55:27 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:57.145 16:55:27 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:41:57.145 16:55:27 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:57.145 16:55:27 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:57.145 16:55:27 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:57.146 16:55:27 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:57.146 16:55:27 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:57.146 16:55:27 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:57.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.146 --rc genhtml_branch_coverage=1 00:41:57.146 --rc genhtml_function_coverage=1 00:41:57.146 --rc genhtml_legend=1 00:41:57.146 --rc geninfo_all_blocks=1 00:41:57.146 --rc geninfo_unexecuted_blocks=1 00:41:57.146 00:41:57.146 ' 00:41:57.146 16:55:27 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:57.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.146 --rc genhtml_branch_coverage=1 00:41:57.146 --rc genhtml_function_coverage=1 00:41:57.146 --rc genhtml_legend=1 00:41:57.146 --rc geninfo_all_blocks=1 00:41:57.146 --rc geninfo_unexecuted_blocks=1 00:41:57.146 00:41:57.146 ' 00:41:57.146 16:55:27 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:41:57.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.146 --rc genhtml_branch_coverage=1 00:41:57.146 --rc genhtml_function_coverage=1 00:41:57.146 --rc genhtml_legend=1 00:41:57.146 --rc geninfo_all_blocks=1 00:41:57.146 --rc geninfo_unexecuted_blocks=1 00:41:57.146 00:41:57.146 ' 00:41:57.146 16:55:27 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:57.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.146 --rc genhtml_branch_coverage=1 00:41:57.146 --rc genhtml_function_coverage=1 00:41:57.146 --rc genhtml_legend=1 00:41:57.146 --rc geninfo_all_blocks=1 00:41:57.146 --rc geninfo_unexecuted_blocks=1 00:41:57.146 00:41:57.146 ' 00:41:57.146 16:55:27 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:57.146 16:55:27 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:57.146 16:55:27 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:57.405 16:55:27 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:57.405 16:55:27 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:57.405 16:55:27 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:57.405 16:55:27 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:57.405 16:55:27 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.405 16:55:27 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.405 16:55:27 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.405 16:55:27 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:57.405 16:55:27 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:57.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:57.405 16:55:27 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:57.405 16:55:27 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:57.405 16:55:27 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:41:57.406 16:55:27 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:57.406 16:55:27 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:57.406 16:55:27 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:57.406 16:55:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:57.406 16:55:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:57.406 16:55:27 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:41:57.406 16:55:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:03.978 16:55:32 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:03.978 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:03.978 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:03.978 16:55:32 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:03.978 Found net devices under 0000:af:00.0: cvl_0_0 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:03.978 Found net devices under 0000:af:00.1: cvl_0_1 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:03.978 
16:55:32 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:03.978 16:55:32 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:03.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:03.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:42:03.978 00:42:03.978 --- 10.0.0.2 ping statistics --- 00:42:03.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:03.978 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:03.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:03.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:42:03.978 00:42:03.978 --- 10.0.0.1 ping statistics --- 00:42:03.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:03.978 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:03.978 16:55:33 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:05.884 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:05.884 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:42:05.884 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:05.884 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:05.884 16:55:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:05.884 16:55:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:05.884 16:55:35 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:05.884 16:55:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1301784 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1301784 00:42:05.884 16:55:35 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:05.884 16:55:35 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1301784 ']' 00:42:05.884 16:55:35 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:05.884 16:55:35 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:05.884 16:55:35 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:05.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:05.884 16:55:35 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:05.884 16:55:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:05.884 [2024-12-14 16:55:35.956902] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:05.884 [2024-12-14 16:55:35.956945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:06.143 [2024-12-14 16:55:36.036575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:06.143 [2024-12-14 16:55:36.057773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:06.143 [2024-12-14 16:55:36.057809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:06.143 [2024-12-14 16:55:36.057817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:06.143 [2024-12-14 16:55:36.057823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:06.143 [2024-12-14 16:55:36.057828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:06.143 [2024-12-14 16:55:36.058304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:06.143 16:55:36 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:06.143 16:55:36 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:06.143 16:55:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:06.143 16:55:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:06.143 [2024-12-14 16:55:36.188736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.143 16:55:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.143 16:55:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:06.143 ************************************ 00:42:06.143 START TEST fio_dif_1_default 00:42:06.143 ************************************ 00:42:06.143 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:06.143 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:06.402 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:06.402 16:55:36 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:42:06.402 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:42:06.402 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:06.403 bdev_null0 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:06.403 [2024-12-14 16:55:36.273070] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:06.403 { 00:42:06.403 "params": { 00:42:06.403 "name": "Nvme$subsystem", 00:42:06.403 "trtype": "$TEST_TRANSPORT", 00:42:06.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:06.403 "adrfam": "ipv4", 00:42:06.403 "trsvcid": "$NVMF_PORT", 00:42:06.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:06.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:06.403 "hdgst": ${hdgst:-false}, 00:42:06.403 "ddgst": ${ddgst:-false} 00:42:06.403 }, 00:42:06.403 "method": "bdev_nvme_attach_controller" 00:42:06.403 } 00:42:06.403 EOF 00:42:06.403 )") 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:06.403 "params": { 00:42:06.403 "name": "Nvme0", 00:42:06.403 "trtype": "tcp", 00:42:06.403 "traddr": "10.0.0.2", 00:42:06.403 "adrfam": "ipv4", 00:42:06.403 "trsvcid": "4420", 00:42:06.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:06.403 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:06.403 "hdgst": false, 00:42:06.403 "ddgst": false 00:42:06.403 }, 00:42:06.403 "method": "bdev_nvme_attach_controller" 00:42:06.403 }' 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:06.403 16:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:06.662 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:06.662 fio-3.35 
00:42:06.662 Starting 1 thread 00:42:18.871 00:42:18.871 filename0: (groupid=0, jobs=1): err= 0: pid=1302151: Sat Dec 14 16:55:47 2024 00:42:18.871 read: IOPS=194, BW=777KiB/s (795kB/s)(7776KiB/10012msec) 00:42:18.871 slat (nsec): min=5913, max=45811, avg=6407.40, stdev=1456.73 00:42:18.871 clat (usec): min=369, max=44901, avg=20583.22, stdev=20514.29 00:42:18.871 lat (usec): min=375, max=44927, avg=20589.63, stdev=20514.25 00:42:18.871 clat percentiles (usec): 00:42:18.871 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 400], 00:42:18.871 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 594], 60.00th=[40633], 00:42:18.871 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:42:18.871 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:42:18.871 | 99.99th=[44827] 00:42:18.871 bw ( KiB/s): min= 672, max= 1024, per=99.91%, avg=776.00, stdev=71.08, samples=20 00:42:18.871 iops : min= 168, max= 256, avg=194.00, stdev=17.77, samples=20 00:42:18.871 lat (usec) : 500=48.30%, 750=2.52% 00:42:18.871 lat (msec) : 50=49.18% 00:42:18.871 cpu : usr=91.97%, sys=7.78%, ctx=14, majf=0, minf=0 00:42:18.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:18.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:18.871 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:18.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:18.871 00:42:18.871 Run status group 0 (all jobs): 00:42:18.871 READ: bw=777KiB/s (795kB/s), 777KiB/s-777KiB/s (795kB/s-795kB/s), io=7776KiB (7963kB), run=10012-10012msec 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.871 00:42:18.871 real 0m11.206s 00:42:18.871 user 0m15.819s 00:42:18.871 sys 0m1.080s 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:18.871 16:55:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:18.871 ************************************ 00:42:18.872 END TEST fio_dif_1_default 00:42:18.872 ************************************ 00:42:18.872 16:55:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:18.872 16:55:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:18.872 16:55:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:18.872 16:55:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:18.872 ************************************ 00:42:18.872 START TEST fio_dif_1_multi_subsystems 00:42:18.872 ************************************ 00:42:18.872 16:55:47 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:18.872 bdev_null0 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.872 16:55:47 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:18.872 [2024-12-14 16:55:47.544890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:18.872 bdev_null1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 
00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:18.872 { 00:42:18.872 "params": { 00:42:18.872 "name": "Nvme$subsystem", 00:42:18.872 "trtype": "$TEST_TRANSPORT", 00:42:18.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:18.872 "adrfam": "ipv4", 00:42:18.872 "trsvcid": "$NVMF_PORT", 00:42:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:18.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:18.872 "hdgst": ${hdgst:-false}, 00:42:18.872 "ddgst": ${ddgst:-false} 00:42:18.872 }, 00:42:18.872 "method": "bdev_nvme_attach_controller" 00:42:18.872 } 00:42:18.872 EOF 00:42:18.872 )") 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:18.872 { 00:42:18.872 "params": { 00:42:18.872 "name": "Nvme$subsystem", 00:42:18.872 "trtype": "$TEST_TRANSPORT", 00:42:18.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:18.872 "adrfam": "ipv4", 00:42:18.872 "trsvcid": "$NVMF_PORT", 00:42:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:18.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:18.872 "hdgst": ${hdgst:-false}, 00:42:18.872 "ddgst": ${ddgst:-false} 00:42:18.872 }, 00:42:18.872 "method": "bdev_nvme_attach_controller" 00:42:18.872 } 00:42:18.872 EOF 00:42:18.872 )") 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:18.872 "params": { 00:42:18.872 "name": "Nvme0", 00:42:18.872 "trtype": "tcp", 00:42:18.872 "traddr": "10.0.0.2", 00:42:18.872 "adrfam": "ipv4", 00:42:18.872 "trsvcid": "4420", 00:42:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:18.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:18.872 "hdgst": false, 00:42:18.872 "ddgst": false 00:42:18.872 }, 00:42:18.872 "method": "bdev_nvme_attach_controller" 00:42:18.872 },{ 00:42:18.872 "params": { 00:42:18.872 "name": "Nvme1", 00:42:18.872 "trtype": "tcp", 00:42:18.872 "traddr": "10.0.0.2", 00:42:18.872 "adrfam": "ipv4", 00:42:18.872 "trsvcid": "4420", 00:42:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:18.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:18.872 "hdgst": false, 00:42:18.872 "ddgst": false 00:42:18.872 }, 00:42:18.872 "method": "bdev_nvme_attach_controller" 00:42:18.872 }' 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:18.872 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:18.873 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:18.873 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:18.873 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:18.873 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:18.873 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:18.873 16:55:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:18.873 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:18.873 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:18.873 fio-3.35 00:42:18.873 Starting 2 threads 00:42:28.847 00:42:28.847 filename0: (groupid=0, jobs=1): err= 0: pid=1304069: Sat Dec 14 16:55:58 2024 00:42:28.847 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10016msec) 00:42:28.847 slat (nsec): min=6164, max=38470, avg=11778.28, stdev=8858.79 00:42:28.847 clat (usec): min=388, max=42409, avg=41347.45, stdev=3764.71 00:42:28.847 lat (usec): min=394, max=42439, avg=41359.23, stdev=3764.86 00:42:28.847 clat percentiles (usec): 00:42:28.847 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:28.847 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:42:28.847 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:28.847 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:28.847 | 99.99th=[42206] 00:42:28.848 bw ( KiB/s): min= 352, max= 448, per=29.79%, avg=385.60, stdev=16.33, samples=20 00:42:28.848 iops : min= 88, max= 112, avg=96.40, stdev= 4.08, samples=20 00:42:28.848 lat (usec) : 500=0.83% 00:42:28.848 lat (msec) : 50=99.17% 00:42:28.848 cpu : usr=98.44%, sys=1.28%, ctx=23, majf=0, minf=179 00:42:28.848 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:28.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.848 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.848 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:28.848 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:28.848 filename1: (groupid=0, jobs=1): err= 0: pid=1304070: Sat Dec 14 16:55:58 2024 00:42:28.848 read: IOPS=226, BW=906KiB/s (928kB/s)(9072KiB/10011msec) 00:42:28.848 slat (nsec): min=5944, max=62744, avg=8568.53, stdev=5208.32 00:42:28.848 clat (usec): min=374, max=42503, avg=17629.69, stdev=20205.97 00:42:28.848 lat (usec): min=381, max=42510, avg=17638.26, stdev=20204.69 00:42:28.848 clat percentiles (usec): 00:42:28.848 | 1.00th=[ 383], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 420], 00:42:28.848 | 30.00th=[ 433], 40.00th=[ 449], 50.00th=[ 523], 60.00th=[40633], 00:42:28.848 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:42:28.848 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:28.848 | 99.99th=[42730] 00:42:28.848 bw ( KiB/s): min= 768, max= 1152, per=70.03%, avg=905.60, stdev=117.96, samples=20 00:42:28.848 iops : min= 192, max= 288, avg=226.40, stdev=29.49, samples=20 00:42:28.848 lat (usec) : 500=48.63%, 750=9.22% 00:42:28.848 lat (msec) : 2=0.18%, 50=41.98% 00:42:28.848 cpu : usr=98.20%, sys=1.51%, ctx=34, majf=0, minf=113 00:42:28.848 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:28.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.848 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:28.848 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:28.848 00:42:28.848 Run status group 0 (all jobs): 00:42:28.848 READ: bw=1292KiB/s (1323kB/s), 387KiB/s-906KiB/s (396kB/s-928kB/s), io=12.6MiB (13.3MB), run=10011-10016msec 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.848 16:55:58 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.848 00:42:28.848 real 0m11.422s 00:42:28.848 user 0m27.107s 00:42:28.848 sys 0m0.657s 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:28.848 16:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:28.848 ************************************ 00:42:28.848 END TEST fio_dif_1_multi_subsystems 00:42:28.848 ************************************ 00:42:29.107 16:55:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:29.107 16:55:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:29.107 16:55:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:29.107 16:55:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:29.107 ************************************ 00:42:29.107 START TEST fio_dif_rand_params 00:42:29.107 ************************************ 00:42:29.107 16:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:29.107 16:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:29.107 16:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:29.107 16:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:29.107 16:55:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.107 bdev_null0 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.107 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:29.107 [2024-12-14 16:55:59.038172] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:29.108 { 00:42:29.108 "params": { 00:42:29.108 "name": "Nvme$subsystem", 00:42:29.108 "trtype": "$TEST_TRANSPORT", 00:42:29.108 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:42:29.108 "adrfam": "ipv4", 00:42:29.108 "trsvcid": "$NVMF_PORT", 00:42:29.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:29.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:29.108 "hdgst": ${hdgst:-false}, 00:42:29.108 "ddgst": ${ddgst:-false} 00:42:29.108 }, 00:42:29.108 "method": "bdev_nvme_attach_controller" 00:42:29.108 } 00:42:29.108 EOF 00:42:29.108 )") 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:29.108 16:55:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:29.108 "params": { 00:42:29.108 "name": "Nvme0", 00:42:29.108 "trtype": "tcp", 00:42:29.108 "traddr": "10.0.0.2", 00:42:29.108 "adrfam": "ipv4", 00:42:29.108 "trsvcid": "4420", 00:42:29.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:29.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:29.108 "hdgst": false, 00:42:29.108 "ddgst": false 00:42:29.108 }, 00:42:29.108 "method": "bdev_nvme_attach_controller" 00:42:29.108 }' 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:29.108 16:55:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:29.367 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:29.367 ... 00:42:29.367 fio-3.35 00:42:29.367 Starting 3 threads 00:42:35.929 00:42:35.929 filename0: (groupid=0, jobs=1): err= 0: pid=1305978: Sat Dec 14 16:56:04 2024 00:42:35.929 read: IOPS=305, BW=38.2MiB/s (40.1MB/s)(191MiB/5005msec) 00:42:35.929 slat (nsec): min=6264, max=67440, avg=13700.94, stdev=5318.69 00:42:35.929 clat (usec): min=3455, max=52747, avg=9795.52, stdev=8114.71 00:42:35.929 lat (usec): min=3461, max=52758, avg=9809.22, stdev=8114.63 00:42:35.929 clat percentiles (usec): 00:42:35.929 | 1.00th=[ 3818], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6718], 00:42:35.929 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:42:35.929 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[11076], 00:42:35.929 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[52691], 00:42:35.929 | 99.99th=[52691] 00:42:35.929 bw ( KiB/s): min=17664, max=50688, per=33.25%, avg=39116.80, stdev=9945.49, samples=10 00:42:35.929 iops : min= 138, max= 396, avg=305.60, stdev=77.70, samples=10 00:42:35.929 lat (msec) : 4=1.44%, 10=84.18%, 20=10.26%, 50=3.59%, 100=0.52% 00:42:35.929 cpu : usr=96.22%, sys=3.46%, ctx=12, majf=0, minf=50 00:42:35.929 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:35.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.929 issued rwts: total=1530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:35.929 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:35.929 filename0: (groupid=0, jobs=1): err= 0: pid=1305979: Sat Dec 14 16:56:04 2024 00:42:35.929 read: IOPS=292, BW=36.6MiB/s (38.4MB/s)(183MiB/5003msec) 00:42:35.929 slat (nsec): min=6180, max=38757, avg=11626.59, stdev=4530.57 
00:42:35.929 clat (usec): min=2938, max=51193, avg=10229.72, stdev=7264.93 00:42:35.929 lat (usec): min=2944, max=51204, avg=10241.35, stdev=7265.19 00:42:35.929 clat percentiles (usec): 00:42:35.929 | 1.00th=[ 3556], 5.00th=[ 4178], 10.00th=[ 5735], 20.00th=[ 6652], 00:42:35.929 | 30.00th=[ 7570], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10159], 00:42:35.929 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12125], 95.00th=[13173], 00:42:35.929 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50070], 99.95th=[51119], 00:42:35.929 | 99.99th=[51119] 00:42:35.929 bw ( KiB/s): min=20736, max=46336, per=31.84%, avg=37452.80, stdev=7689.52, samples=10 00:42:35.929 iops : min= 162, max= 362, avg=292.60, stdev=60.07, samples=10 00:42:35.929 lat (msec) : 4=3.82%, 10=52.83%, 20=40.07%, 50=3.07%, 100=0.20% 00:42:35.929 cpu : usr=96.24%, sys=3.44%, ctx=6, majf=0, minf=76 00:42:35.929 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:35.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.929 issued rwts: total=1465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:35.929 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:35.929 filename0: (groupid=0, jobs=1): err= 0: pid=1305980: Sat Dec 14 16:56:04 2024 00:42:35.929 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(201MiB/5004msec) 00:42:35.929 slat (nsec): min=6151, max=37108, avg=11776.61, stdev=5012.51 00:42:35.929 clat (usec): min=2819, max=92242, avg=9338.38, stdev=8339.68 00:42:35.929 lat (usec): min=2826, max=92255, avg=9350.15, stdev=8339.83 00:42:35.929 clat percentiles (usec): 00:42:35.929 | 1.00th=[ 3261], 5.00th=[ 3785], 10.00th=[ 5145], 20.00th=[ 6259], 00:42:35.929 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 8455], 00:42:35.929 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[10945], 00:42:35.929 | 99.00th=[50594], 99.50th=[51643], 
99.90th=[52167], 99.95th=[91751], 00:42:35.929 | 99.99th=[91751] 00:42:35.929 bw ( KiB/s): min=29696, max=57088, per=34.86%, avg=41011.20, stdev=9323.46, samples=10 00:42:35.929 iops : min= 232, max= 446, avg=320.40, stdev=72.84, samples=10 00:42:35.929 lat (msec) : 4=6.04%, 10=84.55%, 20=5.55%, 50=2.12%, 100=1.74% 00:42:35.929 cpu : usr=96.30%, sys=3.38%, ctx=8, majf=0, minf=46 00:42:35.929 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:35.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.929 issued rwts: total=1605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:35.929 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:35.929 00:42:35.929 Run status group 0 (all jobs): 00:42:35.929 READ: bw=115MiB/s (120MB/s), 36.6MiB/s-40.1MiB/s (38.4MB/s-42.0MB/s), io=575MiB (603MB), run=5003-5005msec 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:35.929 16:56:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:35.929 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 bdev_null0 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 [2024-12-14 16:56:05.252065] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 bdev_null1 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:42:35.930 bdev_null2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:35.930 { 00:42:35.930 "params": { 00:42:35.930 "name": "Nvme$subsystem", 00:42:35.930 "trtype": "$TEST_TRANSPORT", 00:42:35.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:35.930 "adrfam": "ipv4", 00:42:35.930 "trsvcid": "$NVMF_PORT", 00:42:35.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:35.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:35.930 "hdgst": ${hdgst:-false}, 00:42:35.930 "ddgst": ${ddgst:-false} 00:42:35.930 }, 00:42:35.930 "method": "bdev_nvme_attach_controller" 00:42:35.930 } 00:42:35.930 EOF 00:42:35.930 )") 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:35.930 16:56:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:35.930 { 00:42:35.930 "params": { 00:42:35.930 "name": "Nvme$subsystem", 00:42:35.930 "trtype": "$TEST_TRANSPORT", 00:42:35.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:35.930 "adrfam": "ipv4", 00:42:35.930 "trsvcid": "$NVMF_PORT", 00:42:35.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:35.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:35.930 "hdgst": ${hdgst:-false}, 00:42:35.930 "ddgst": ${ddgst:-false} 00:42:35.930 }, 00:42:35.930 "method": "bdev_nvme_attach_controller" 00:42:35.930 } 00:42:35.930 EOF 00:42:35.930 )") 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:35.930 16:56:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:35.930 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:35.930 { 00:42:35.930 "params": { 00:42:35.930 "name": "Nvme$subsystem", 00:42:35.930 "trtype": "$TEST_TRANSPORT", 00:42:35.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:35.930 "adrfam": "ipv4", 00:42:35.930 "trsvcid": "$NVMF_PORT", 00:42:35.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:35.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:35.930 "hdgst": ${hdgst:-false}, 00:42:35.930 "ddgst": ${ddgst:-false} 00:42:35.931 }, 00:42:35.931 "method": "bdev_nvme_attach_controller" 00:42:35.931 } 00:42:35.931 EOF 00:42:35.931 )") 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:35.931 "params": { 00:42:35.931 "name": "Nvme0", 00:42:35.931 "trtype": "tcp", 00:42:35.931 "traddr": "10.0.0.2", 00:42:35.931 "adrfam": "ipv4", 00:42:35.931 "trsvcid": "4420", 00:42:35.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:35.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:35.931 "hdgst": false, 00:42:35.931 "ddgst": false 00:42:35.931 }, 00:42:35.931 "method": "bdev_nvme_attach_controller" 00:42:35.931 },{ 00:42:35.931 "params": { 00:42:35.931 "name": "Nvme1", 00:42:35.931 "trtype": "tcp", 00:42:35.931 "traddr": "10.0.0.2", 00:42:35.931 "adrfam": "ipv4", 00:42:35.931 "trsvcid": "4420", 00:42:35.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:35.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:35.931 "hdgst": false, 00:42:35.931 "ddgst": false 00:42:35.931 }, 00:42:35.931 "method": "bdev_nvme_attach_controller" 00:42:35.931 },{ 00:42:35.931 "params": { 00:42:35.931 "name": "Nvme2", 00:42:35.931 "trtype": "tcp", 00:42:35.931 "traddr": "10.0.0.2", 00:42:35.931 "adrfam": "ipv4", 00:42:35.931 "trsvcid": "4420", 00:42:35.931 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:35.931 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:35.931 "hdgst": false, 00:42:35.931 "ddgst": false 00:42:35.931 }, 00:42:35.931 "method": "bdev_nvme_attach_controller" 00:42:35.931 }' 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:35.931 16:56:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:35.931 16:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:35.931 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:35.931 ... 00:42:35.931 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:35.931 ... 00:42:35.931 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:35.931 ... 
00:42:35.931 fio-3.35 00:42:35.931 Starting 24 threads 00:42:48.124 00:42:48.124 filename0: (groupid=0, jobs=1): err= 0: pid=1307133: Sat Dec 14 16:56:16 2024 00:42:48.124 read: IOPS=62, BW=248KiB/s (254kB/s)(2520KiB/10146msec) 00:42:48.124 slat (nsec): min=7573, max=39742, avg=10602.65, stdev=4460.17 00:42:48.124 clat (msec): min=64, max=445, avg=256.36, stdev=61.64 00:42:48.124 lat (msec): min=64, max=445, avg=256.37, stdev=61.64 00:42:48.124 clat percentiles (msec): 00:42:48.124 | 1.00th=[ 65], 5.00th=[ 161], 10.00th=[ 209], 20.00th=[ 222], 00:42:48.124 | 30.00th=[ 226], 40.00th=[ 236], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.124 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 330], 00:42:48.124 | 99.00th=[ 443], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:42:48.124 | 99.99th=[ 447] 00:42:48.124 bw ( KiB/s): min= 176, max= 384, per=4.43%, avg=245.60, stdev=55.98, samples=20 00:42:48.124 iops : min= 44, max= 96, avg=61.40, stdev=13.99, samples=20 00:42:48.124 lat (msec) : 100=4.76%, 250=37.78%, 500=57.46% 00:42:48.124 cpu : usr=98.38%, sys=1.21%, ctx=13, majf=0, minf=9 00:42:48.124 IO depths : 1=0.3%, 2=0.8%, 4=7.1%, 8=79.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:42:48.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 complete : 0=0.0%, 4=88.9%, 8=6.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.124 filename0: (groupid=0, jobs=1): err= 0: pid=1307135: Sat Dec 14 16:56:16 2024 00:42:48.124 read: IOPS=56, BW=226KiB/s (232kB/s)(2288KiB/10106msec) 00:42:48.124 slat (nsec): min=6349, max=27432, avg=9589.94, stdev=4120.31 00:42:48.124 clat (msec): min=173, max=548, avg=282.39, stdev=68.19 00:42:48.124 lat (msec): min=173, max=548, avg=282.40, stdev=68.19 00:42:48.124 clat percentiles (msec): 00:42:48.124 | 1.00th=[ 174], 5.00th=[ 207], 10.00th=[ 215], 20.00th=[ 224], 
00:42:48.124 | 30.00th=[ 228], 40.00th=[ 255], 50.00th=[ 266], 60.00th=[ 288], 00:42:48.124 | 70.00th=[ 305], 80.00th=[ 313], 90.00th=[ 401], 95.00th=[ 443], 00:42:48.124 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 550], 99.95th=[ 550], 00:42:48.124 | 99.99th=[ 550] 00:42:48.124 bw ( KiB/s): min= 112, max= 304, per=4.02%, avg=222.40, stdev=49.49, samples=20 00:42:48.124 iops : min= 28, max= 76, avg=55.60, stdev=12.37, samples=20 00:42:48.124 lat (msec) : 250=34.97%, 500=64.69%, 750=0.35% 00:42:48.124 cpu : usr=99.02%, sys=0.67%, ctx=9, majf=0, minf=9 00:42:48.124 IO depths : 1=0.2%, 2=0.5%, 4=6.1%, 8=80.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:42:48.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 complete : 0=0.0%, 4=88.4%, 8=7.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 issued rwts: total=572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.124 filename0: (groupid=0, jobs=1): err= 0: pid=1307136: Sat Dec 14 16:56:16 2024 00:42:48.124 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10139msec) 00:42:48.124 slat (nsec): min=7649, max=45648, avg=15685.12, stdev=5272.65 00:42:48.124 clat (msec): min=198, max=297, avg=259.32, stdev=30.77 00:42:48.124 lat (msec): min=198, max=297, avg=259.34, stdev=30.77 00:42:48.124 clat percentiles (msec): 00:42:48.124 | 1.00th=[ 199], 5.00th=[ 209], 10.00th=[ 213], 20.00th=[ 222], 00:42:48.124 | 30.00th=[ 230], 40.00th=[ 271], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.124 | 70.00th=[ 284], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 288], 00:42:48.124 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 300], 00:42:48.124 | 99.99th=[ 300] 00:42:48.124 bw ( KiB/s): min= 144, max= 368, per=4.40%, avg=243.20, stdev=50.22, samples=20 00:42:48.124 iops : min= 36, max= 92, avg=60.80, stdev=12.56, samples=20 00:42:48.124 lat (msec) : 250=35.90%, 500=64.10% 00:42:48.124 cpu : usr=98.75%, sys=0.82%, ctx=12, majf=0, minf=9 
00:42:48.124 IO depths : 1=0.6%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:48.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.124 filename0: (groupid=0, jobs=1): err= 0: pid=1307137: Sat Dec 14 16:56:16 2024 00:42:48.124 read: IOPS=59, BW=239KiB/s (244kB/s)(2416KiB/10122msec) 00:42:48.124 slat (nsec): min=6483, max=22024, avg=9525.88, stdev=2503.70 00:42:48.124 clat (msec): min=159, max=416, avg=267.68, stdev=35.23 00:42:48.124 lat (msec): min=159, max=416, avg=267.69, stdev=35.23 00:42:48.124 clat percentiles (msec): 00:42:48.124 | 1.00th=[ 161], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 226], 00:42:48.124 | 30.00th=[ 245], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 284], 00:42:48.124 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 288], 95.00th=[ 313], 00:42:48.124 | 99.00th=[ 347], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 418], 00:42:48.124 | 99.99th=[ 418] 00:42:48.124 bw ( KiB/s): min= 128, max= 304, per=4.25%, avg=235.20, stdev=44.99, samples=20 00:42:48.124 iops : min= 32, max= 76, avg=58.80, stdev=11.25, samples=20 00:42:48.124 lat (msec) : 250=30.13%, 500=69.87% 00:42:48.124 cpu : usr=98.69%, sys=0.92%, ctx=15, majf=0, minf=9 00:42:48.124 IO depths : 1=0.7%, 2=1.5%, 4=8.6%, 8=77.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:48.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 issued rwts: total=604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.124 filename0: (groupid=0, jobs=1): err= 0: pid=1307138: Sat Dec 14 16:56:16 2024 00:42:48.124 read: IOPS=40, BW=164KiB/s (168kB/s)(1656KiB/10107msec) 
00:42:48.124 slat (nsec): min=4536, max=33075, avg=9610.17, stdev=3969.21 00:42:48.124 clat (msec): min=147, max=637, avg=390.40, stdev=88.28 00:42:48.124 lat (msec): min=147, max=637, avg=390.41, stdev=88.28 00:42:48.124 clat percentiles (msec): 00:42:48.124 | 1.00th=[ 148], 5.00th=[ 232], 10.00th=[ 279], 20.00th=[ 313], 00:42:48.124 | 30.00th=[ 363], 40.00th=[ 393], 50.00th=[ 405], 60.00th=[ 409], 00:42:48.124 | 70.00th=[ 443], 80.00th=[ 447], 90.00th=[ 451], 95.00th=[ 567], 00:42:48.124 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 634], 99.95th=[ 634], 00:42:48.124 | 99.99th=[ 634] 00:42:48.124 bw ( KiB/s): min= 112, max= 256, per=3.02%, avg=167.58, stdev=60.19, samples=19 00:42:48.124 iops : min= 28, max= 64, avg=41.89, stdev=15.05, samples=19 00:42:48.124 lat (msec) : 250=5.31%, 500=86.96%, 750=7.73% 00:42:48.124 cpu : usr=98.67%, sys=0.93%, ctx=11, majf=0, minf=9 00:42:48.124 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:42:48.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.124 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.124 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.124 filename0: (groupid=0, jobs=1): err= 0: pid=1307139: Sat Dec 14 16:56:16 2024 00:42:48.124 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10157msec) 00:42:48.124 slat (nsec): min=7646, max=34545, avg=13526.39, stdev=4455.20 00:42:48.124 clat (msec): min=198, max=317, avg=259.43, stdev=30.78 00:42:48.125 lat (msec): min=198, max=317, avg=259.45, stdev=30.78 00:42:48.125 clat percentiles (msec): 00:42:48.125 | 1.00th=[ 199], 5.00th=[ 211], 10.00th=[ 215], 20.00th=[ 222], 00:42:48.125 | 30.00th=[ 230], 40.00th=[ 271], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.125 | 70.00th=[ 284], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 288], 00:42:48.125 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 317], 99.95th=[ 
317], 00:42:48.125 | 99.99th=[ 317] 00:42:48.125 bw ( KiB/s): min= 144, max= 368, per=4.40%, avg=243.20, stdev=50.49, samples=20 00:42:48.125 iops : min= 36, max= 92, avg=60.80, stdev=12.62, samples=20 00:42:48.125 lat (msec) : 250=36.22%, 500=63.78% 00:42:48.125 cpu : usr=98.65%, sys=0.96%, ctx=14, majf=0, minf=9 00:42:48.125 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:42:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.125 filename0: (groupid=0, jobs=1): err= 0: pid=1307140: Sat Dec 14 16:56:16 2024 00:42:48.125 read: IOPS=58, BW=236KiB/s (241kB/s)(2384KiB/10116msec) 00:42:48.125 slat (nsec): min=5232, max=32784, avg=9295.90, stdev=2275.54 00:42:48.125 clat (msec): min=181, max=455, avg=270.89, stdev=53.99 00:42:48.125 lat (msec): min=181, max=455, avg=270.90, stdev=53.99 00:42:48.125 clat percentiles (msec): 00:42:48.125 | 1.00th=[ 182], 5.00th=[ 209], 10.00th=[ 211], 20.00th=[ 224], 00:42:48.125 | 30.00th=[ 232], 40.00th=[ 264], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.125 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 321], 95.00th=[ 405], 00:42:48.125 | 99.00th=[ 451], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:42:48.125 | 99.99th=[ 456] 00:42:48.125 bw ( KiB/s): min= 128, max= 304, per=4.18%, avg=232.00, stdev=46.29, samples=20 00:42:48.125 iops : min= 32, max= 76, avg=58.00, stdev=11.57, samples=20 00:42:48.125 lat (msec) : 250=36.58%, 500=63.42% 00:42:48.125 cpu : usr=98.86%, sys=0.74%, ctx=13, majf=0, minf=9 00:42:48.125 IO depths : 1=0.3%, 2=1.5%, 4=9.4%, 8=76.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 complete : 0=0.0%, 4=89.6%, 8=5.2%, 16=5.2%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:42:48.125 issued rwts: total=596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.125 filename0: (groupid=0, jobs=1): err= 0: pid=1307141: Sat Dec 14 16:56:16 2024 00:42:48.125 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10146msec) 00:42:48.125 slat (nsec): min=7534, max=38699, avg=12491.42, stdev=6535.23 00:42:48.125 clat (msec): min=64, max=446, avg=259.36, stdev=63.92 00:42:48.125 lat (msec): min=64, max=446, avg=259.37, stdev=63.92 00:42:48.125 clat percentiles (msec): 00:42:48.125 | 1.00th=[ 65], 5.00th=[ 73], 10.00th=[ 213], 20.00th=[ 226], 00:42:48.125 | 30.00th=[ 234], 40.00th=[ 243], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.125 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 321], 95.00th=[ 330], 00:42:48.125 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:42:48.125 | 99.99th=[ 447] 00:42:48.125 bw ( KiB/s): min= 176, max= 384, per=4.40%, avg=243.20, stdev=51.28, samples=20 00:42:48.125 iops : min= 44, max= 96, avg=60.80, stdev=12.82, samples=20 00:42:48.125 lat (msec) : 100=5.13%, 250=35.26%, 500=59.62% 00:42:48.125 cpu : usr=98.80%, sys=0.81%, ctx=12, majf=0, minf=9 00:42:48.125 IO depths : 1=1.0%, 2=2.6%, 4=10.6%, 8=74.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 complete : 0=0.0%, 4=89.8%, 8=5.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.125 filename1: (groupid=0, jobs=1): err= 0: pid=1307142: Sat Dec 14 16:56:16 2024 00:42:48.125 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10106msec) 00:42:48.125 slat (nsec): min=7519, max=33196, avg=10159.61, stdev=4555.44 00:42:48.125 clat (msec): min=211, max=570, avg=387.86, stdev=80.60 00:42:48.125 lat (msec): min=211, max=570, avg=387.87, stdev=80.60 00:42:48.125 clat percentiles 
(msec): 00:42:48.125 | 1.00th=[ 211], 5.00th=[ 230], 10.00th=[ 279], 20.00th=[ 317], 00:42:48.125 | 30.00th=[ 355], 40.00th=[ 401], 50.00th=[ 405], 60.00th=[ 409], 00:42:48.125 | 70.00th=[ 439], 80.00th=[ 443], 90.00th=[ 451], 95.00th=[ 558], 00:42:48.125 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 575], 00:42:48.125 | 99.99th=[ 575] 00:42:48.125 bw ( KiB/s): min= 112, max= 256, per=3.04%, avg=168.42, stdev=58.03, samples=19 00:42:48.125 iops : min= 28, max= 64, avg=42.11, stdev=14.51, samples=19 00:42:48.125 lat (msec) : 250=8.17%, 500=86.06%, 750=5.77% 00:42:48.125 cpu : usr=98.97%, sys=0.62%, ctx=13, majf=0, minf=9 00:42:48.125 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:42:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.125 filename1: (groupid=0, jobs=1): err= 0: pid=1307143: Sat Dec 14 16:56:16 2024 00:42:48.125 read: IOPS=57, BW=229KiB/s (235kB/s)(2320KiB/10110msec) 00:42:48.125 slat (nsec): min=4926, max=21433, avg=9154.20, stdev=2150.58 00:42:48.125 clat (msec): min=178, max=456, avg=278.22, stdev=65.69 00:42:48.125 lat (msec): min=178, max=456, avg=278.23, stdev=65.69 00:42:48.125 clat percentiles (msec): 00:42:48.125 | 1.00th=[ 180], 5.00th=[ 209], 10.00th=[ 211], 20.00th=[ 224], 00:42:48.125 | 30.00th=[ 228], 40.00th=[ 251], 50.00th=[ 275], 60.00th=[ 284], 00:42:48.125 | 70.00th=[ 288], 80.00th=[ 317], 90.00th=[ 401], 95.00th=[ 430], 00:42:48.125 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:42:48.125 | 99.99th=[ 456] 00:42:48.125 bw ( KiB/s): min= 128, max= 336, per=4.07%, avg=225.60, stdev=50.84, samples=20 00:42:48.125 iops : min= 32, max= 84, avg=56.40, stdev=12.71, samples=20 00:42:48.125 lat (msec) : 250=36.55%, 
500=63.45% 00:42:48.125 cpu : usr=98.82%, sys=0.77%, ctx=19, majf=0, minf=9 00:42:48.125 IO depths : 1=0.7%, 2=1.6%, 4=7.9%, 8=77.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 complete : 0=0.0%, 4=89.0%, 8=6.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 issued rwts: total=580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.125 filename1: (groupid=0, jobs=1): err= 0: pid=1307145: Sat Dec 14 16:56:16 2024 00:42:48.125 read: IOPS=57, BW=229KiB/s (234kB/s)(2312KiB/10107msec) 00:42:48.125 slat (nsec): min=7528, max=29527, avg=9505.92, stdev=2578.90 00:42:48.125 clat (msec): min=156, max=447, avg=279.32, stdev=47.89 00:42:48.125 lat (msec): min=156, max=447, avg=279.33, stdev=47.89 00:42:48.125 clat percentiles (msec): 00:42:48.125 | 1.00th=[ 157], 5.00th=[ 211], 10.00th=[ 224], 20.00th=[ 230], 00:42:48.125 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 284], 00:42:48.125 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 359], 95.00th=[ 368], 00:42:48.125 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 447], 99.95th=[ 447], 00:42:48.125 | 99.99th=[ 447] 00:42:48.125 bw ( KiB/s): min= 128, max= 304, per=4.05%, avg=224.80, stdev=45.69, samples=20 00:42:48.125 iops : min= 32, max= 76, avg=56.20, stdev=11.42, samples=20 00:42:48.125 lat (msec) : 250=21.11%, 500=78.89% 00:42:48.125 cpu : usr=98.86%, sys=0.72%, ctx=21, majf=0, minf=9 00:42:48.125 IO depths : 1=0.9%, 2=2.1%, 4=9.7%, 8=75.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:42:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 complete : 0=0.0%, 4=89.6%, 8=4.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 issued rwts: total=578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.125 filename1: (groupid=0, jobs=1): err= 0: pid=1307146: Sat Dec 14 16:56:16 
2024 00:42:48.125 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10157msec) 00:42:48.125 slat (nsec): min=7678, max=37540, avg=11989.38, stdev=3519.64 00:42:48.125 clat (msec): min=198, max=313, avg=259.44, stdev=30.75 00:42:48.125 lat (msec): min=198, max=313, avg=259.46, stdev=30.75 00:42:48.125 clat percentiles (msec): 00:42:48.125 | 1.00th=[ 199], 5.00th=[ 211], 10.00th=[ 215], 20.00th=[ 222], 00:42:48.125 | 30.00th=[ 230], 40.00th=[ 271], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.125 | 70.00th=[ 284], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 288], 00:42:48.125 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:42:48.125 | 99.99th=[ 313] 00:42:48.125 bw ( KiB/s): min= 144, max= 368, per=4.40%, avg=243.20, stdev=50.49, samples=20 00:42:48.125 iops : min= 36, max= 92, avg=60.80, stdev=12.62, samples=20 00:42:48.125 lat (msec) : 250=36.22%, 500=63.78% 00:42:48.125 cpu : usr=98.51%, sys=1.08%, ctx=13, majf=0, minf=9 00:42:48.125 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:42:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.125 filename1: (groupid=0, jobs=1): err= 0: pid=1307147: Sat Dec 14 16:56:16 2024 00:42:48.125 read: IOPS=62, BW=250KiB/s (256kB/s)(2536KiB/10146msec) 00:42:48.125 slat (nsec): min=7518, max=34224, avg=10920.35, stdev=4668.99 00:42:48.125 clat (msec): min=63, max=452, avg=255.32, stdev=62.78 00:42:48.125 lat (msec): min=63, max=452, avg=255.33, stdev=62.78 00:42:48.125 clat percentiles (msec): 00:42:48.125 | 1.00th=[ 65], 5.00th=[ 73], 10.00th=[ 205], 20.00th=[ 222], 00:42:48.125 | 30.00th=[ 228], 40.00th=[ 241], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.125 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 288], 95.00th=[ 326], 
00:42:48.125 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:42:48.125 | 99.99th=[ 451] 00:42:48.125 bw ( KiB/s): min= 176, max= 384, per=4.47%, avg=247.20, stdev=52.81, samples=20 00:42:48.125 iops : min= 44, max= 96, avg=61.80, stdev=13.20, samples=20 00:42:48.125 lat (msec) : 100=5.05%, 250=37.22%, 500=57.73% 00:42:48.125 cpu : usr=98.68%, sys=0.93%, ctx=13, majf=0, minf=9 00:42:48.125 IO depths : 1=0.6%, 2=1.4%, 4=8.2%, 8=77.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:42:48.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 complete : 0=0.0%, 4=89.2%, 8=5.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.125 issued rwts: total=634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.125 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.126 filename1: (groupid=0, jobs=1): err= 0: pid=1307148: Sat Dec 14 16:56:16 2024 00:42:48.126 read: IOPS=61, BW=247KiB/s (252kB/s)(2496KiB/10125msec) 00:42:48.126 slat (nsec): min=6261, max=33424, avg=14712.59, stdev=4571.86 00:42:48.126 clat (msec): min=198, max=302, avg=258.98, stdev=30.89 00:42:48.126 lat (msec): min=198, max=302, avg=258.99, stdev=30.89 00:42:48.126 clat percentiles (msec): 00:42:48.126 | 1.00th=[ 199], 5.00th=[ 209], 10.00th=[ 211], 20.00th=[ 222], 00:42:48.126 | 30.00th=[ 228], 40.00th=[ 271], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.126 | 70.00th=[ 284], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 288], 00:42:48.126 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 305], 00:42:48.126 | 99.99th=[ 305] 00:42:48.126 bw ( KiB/s): min= 144, max= 368, per=4.40%, avg=243.20, stdev=50.22, samples=20 00:42:48.126 iops : min= 36, max= 92, avg=60.80, stdev=12.56, samples=20 00:42:48.126 lat (msec) : 250=38.46%, 500=61.54% 00:42:48.126 cpu : usr=98.67%, sys=0.92%, ctx=13, majf=0, minf=9 00:42:48.126 IO depths : 1=0.6%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:42:48.126 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.126 filename1: (groupid=0, jobs=1): err= 0: pid=1307149: Sat Dec 14 16:56:16 2024 00:42:48.126 read: IOPS=59, BW=239KiB/s (245kB/s)(2416KiB/10116msec) 00:42:48.126 slat (nsec): min=7506, max=32658, avg=9534.80, stdev=2664.87 00:42:48.126 clat (msec): min=165, max=374, avg=267.46, stdev=36.04 00:42:48.126 lat (msec): min=165, max=374, avg=267.47, stdev=36.04 00:42:48.126 clat percentiles (msec): 00:42:48.126 | 1.00th=[ 167], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 226], 00:42:48.126 | 30.00th=[ 251], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 284], 00:42:48.126 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 288], 95.00th=[ 313], 00:42:48.126 | 99.00th=[ 342], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:42:48.126 | 99.99th=[ 376] 00:42:48.126 bw ( KiB/s): min= 128, max= 304, per=4.25%, avg=235.20, stdev=44.99, samples=20 00:42:48.126 iops : min= 32, max= 76, avg=58.80, stdev=11.25, samples=20 00:42:48.126 lat (msec) : 250=29.80%, 500=70.20% 00:42:48.126 cpu : usr=98.61%, sys=1.00%, ctx=11, majf=0, minf=9 00:42:48.126 IO depths : 1=0.7%, 2=1.5%, 4=8.6%, 8=77.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:42:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 issued rwts: total=604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.126 filename1: (groupid=0, jobs=1): err= 0: pid=1307150: Sat Dec 14 16:56:16 2024 00:42:48.126 read: IOPS=62, BW=248KiB/s (254kB/s)(2520KiB/10146msec) 00:42:48.126 slat (nsec): min=6389, max=33065, avg=10682.76, stdev=5438.73 00:42:48.126 clat (msec): min=60, max=453, avg=256.39, stdev=62.86 00:42:48.126 lat (msec): 
min=60, max=453, avg=256.40, stdev=62.86 00:42:48.126 clat percentiles (msec): 00:42:48.126 | 1.00th=[ 65], 5.00th=[ 77], 10.00th=[ 211], 20.00th=[ 226], 00:42:48.126 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.126 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 334], 00:42:48.126 | 99.00th=[ 451], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:42:48.126 | 99.99th=[ 456] 00:42:48.126 bw ( KiB/s): min= 176, max= 384, per=4.43%, avg=245.60, stdev=55.98, samples=20 00:42:48.126 iops : min= 44, max= 96, avg=61.40, stdev=13.99, samples=20 00:42:48.126 lat (msec) : 100=5.08%, 250=36.19%, 500=58.73% 00:42:48.126 cpu : usr=98.83%, sys=0.82%, ctx=20, majf=0, minf=9 00:42:48.126 IO depths : 1=0.5%, 2=1.3%, 4=8.1%, 8=77.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 complete : 0=0.0%, 4=89.1%, 8=5.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.126 filename2: (groupid=0, jobs=1): err= 0: pid=1307151: Sat Dec 14 16:56:16 2024 00:42:48.126 read: IOPS=58, BW=233KiB/s (238kB/s)(2352KiB/10116msec) 00:42:48.126 slat (nsec): min=7535, max=31983, avg=9705.19, stdev=3114.51 00:42:48.126 clat (msec): min=162, max=451, avg=274.49, stdev=47.56 00:42:48.126 lat (msec): min=162, max=451, avg=274.50, stdev=47.56 00:42:48.126 clat percentiles (msec): 00:42:48.126 | 1.00th=[ 163], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 228], 00:42:48.126 | 30.00th=[ 259], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 284], 00:42:48.126 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 309], 95.00th=[ 355], 00:42:48.126 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:42:48.126 | 99.99th=[ 451] 00:42:48.126 bw ( KiB/s): min= 128, max= 304, per=4.13%, avg=228.80, stdev=49.00, samples=20 00:42:48.126 iops : min= 32, max= 76, 
avg=57.20, stdev=12.25, samples=20 00:42:48.126 lat (msec) : 250=28.57%, 500=71.43% 00:42:48.126 cpu : usr=98.69%, sys=0.90%, ctx=13, majf=0, minf=9 00:42:48.126 IO depths : 1=0.5%, 2=1.2%, 4=7.7%, 8=78.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:42:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 complete : 0=0.0%, 4=89.0%, 8=6.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.126 filename2: (groupid=0, jobs=1): err= 0: pid=1307152: Sat Dec 14 16:56:16 2024 00:42:48.126 read: IOPS=52, BW=210KiB/s (215kB/s)(2128KiB/10117msec) 00:42:48.126 slat (nsec): min=5013, max=27202, avg=9676.50, stdev=4372.52 00:42:48.126 clat (msec): min=157, max=567, avg=303.40, stdev=72.98 00:42:48.126 lat (msec): min=157, max=567, avg=303.41, stdev=72.98 00:42:48.126 clat percentiles (msec): 00:42:48.126 | 1.00th=[ 159], 5.00th=[ 222], 10.00th=[ 224], 20.00th=[ 271], 00:42:48.126 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:42:48.126 | 70.00th=[ 296], 80.00th=[ 347], 90.00th=[ 405], 95.00th=[ 447], 00:42:48.126 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:42:48.126 | 99.99th=[ 567] 00:42:48.126 bw ( KiB/s): min= 144, max= 256, per=3.93%, avg=217.26, stdev=38.58, samples=19 00:42:48.126 iops : min= 36, max= 64, avg=54.32, stdev= 9.64, samples=19 00:42:48.126 lat (msec) : 250=15.04%, 500=81.95%, 750=3.01% 00:42:48.126 cpu : usr=98.85%, sys=0.81%, ctx=14, majf=0, minf=9 00:42:48.126 IO depths : 1=0.9%, 2=2.6%, 4=10.3%, 8=73.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:42:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 issued rwts: total=532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.126 latency : target=0, window=0, percentile=100.00%, depth=16 
00:42:48.126 filename2: (groupid=0, jobs=1): err= 0: pid=1307153: Sat Dec 14 16:56:16 2024 00:42:48.126 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10109msec) 00:42:48.126 slat (nsec): min=7559, max=30015, avg=9377.70, stdev=2552.20 00:42:48.126 clat (msec): min=176, max=451, avg=280.18, stdev=67.13 00:42:48.126 lat (msec): min=176, max=452, avg=280.19, stdev=67.13 00:42:48.126 clat percentiles (msec): 00:42:48.126 | 1.00th=[ 178], 5.00th=[ 207], 10.00th=[ 211], 20.00th=[ 224], 00:42:48.126 | 30.00th=[ 230], 40.00th=[ 251], 50.00th=[ 271], 60.00th=[ 284], 00:42:48.126 | 70.00th=[ 305], 80.00th=[ 317], 90.00th=[ 401], 95.00th=[ 430], 00:42:48.126 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:42:48.126 | 99.99th=[ 451] 00:42:48.126 bw ( KiB/s): min= 128, max= 304, per=4.05%, avg=224.00, stdev=47.58, samples=20 00:42:48.126 iops : min= 32, max= 76, avg=56.00, stdev=11.89, samples=20 00:42:48.126 lat (msec) : 250=39.24%, 500=60.76% 00:42:48.126 cpu : usr=98.82%, sys=0.79%, ctx=12, majf=0, minf=9 00:42:48.126 IO depths : 1=0.5%, 2=1.2%, 4=7.3%, 8=78.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:42:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 complete : 0=0.0%, 4=88.8%, 8=6.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.126 filename2: (groupid=0, jobs=1): err= 0: pid=1307155: Sat Dec 14 16:56:16 2024 00:42:48.126 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10106msec) 00:42:48.126 slat (nsec): min=4995, max=35104, avg=9967.99, stdev=4039.54 00:42:48.126 clat (msec): min=147, max=572, avg=388.59, stdev=81.56 00:42:48.126 lat (msec): min=147, max=572, avg=388.60, stdev=81.56 00:42:48.126 clat percentiles (msec): 00:42:48.126 | 1.00th=[ 222], 5.00th=[ 241], 10.00th=[ 279], 20.00th=[ 313], 00:42:48.126 | 30.00th=[ 363], 40.00th=[ 376], 50.00th=[ 401], 60.00th=[ 405], 
00:42:48.126 | 70.00th=[ 439], 80.00th=[ 447], 90.00th=[ 456], 95.00th=[ 567], 00:42:48.126 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 575], 00:42:48.126 | 99.99th=[ 575] 00:42:48.126 bw ( KiB/s): min= 112, max= 256, per=3.04%, avg=168.42, stdev=56.28, samples=19 00:42:48.126 iops : min= 28, max= 64, avg=42.11, stdev=14.07, samples=19 00:42:48.126 lat (msec) : 250=5.29%, 500=87.02%, 750=7.69% 00:42:48.126 cpu : usr=98.72%, sys=0.88%, ctx=12, majf=0, minf=9 00:42:48.126 IO depths : 1=3.6%, 2=9.6%, 4=24.3%, 8=53.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:42:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.126 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.126 filename2: (groupid=0, jobs=1): err= 0: pid=1307156: Sat Dec 14 16:56:16 2024 00:42:48.126 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10122msec) 00:42:48.126 slat (nsec): min=6677, max=37250, avg=13244.12, stdev=4843.70 00:42:48.126 clat (msec): min=198, max=302, avg=258.90, stdev=30.94 00:42:48.126 lat (msec): min=198, max=302, avg=258.91, stdev=30.94 00:42:48.126 clat percentiles (msec): 00:42:48.126 | 1.00th=[ 199], 5.00th=[ 209], 10.00th=[ 213], 20.00th=[ 222], 00:42:48.126 | 30.00th=[ 228], 40.00th=[ 271], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.126 | 70.00th=[ 284], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 288], 00:42:48.126 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 305], 00:42:48.126 | 99.99th=[ 305] 00:42:48.126 bw ( KiB/s): min= 144, max= 368, per=4.40%, avg=243.20, stdev=50.22, samples=20 00:42:48.126 iops : min= 36, max= 92, avg=60.80, stdev=12.56, samples=20 00:42:48.126 lat (msec) : 250=38.46%, 500=61.54% 00:42:48.126 cpu : usr=98.69%, sys=0.91%, ctx=12, majf=0, minf=9 00:42:48.126 IO depths : 1=0.5%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.0%, 32=0.0%, >=64=0.0% 
00:42:48.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.126 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.127 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.127 filename2: (groupid=0, jobs=1): err= 0: pid=1307157: Sat Dec 14 16:56:16 2024 00:42:48.127 read: IOPS=62, BW=249KiB/s (255kB/s)(2528KiB/10143msec) 00:42:48.127 slat (nsec): min=7537, max=31886, avg=10639.77, stdev=5114.54 00:42:48.127 clat (msec): min=64, max=432, avg=255.95, stdev=56.33 00:42:48.127 lat (msec): min=64, max=432, avg=255.96, stdev=56.33 00:42:48.127 clat percentiles (msec): 00:42:48.127 | 1.00th=[ 65], 5.00th=[ 70], 10.00th=[ 211], 20.00th=[ 224], 00:42:48.127 | 30.00th=[ 230], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 284], 00:42:48.127 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 288], 95.00th=[ 305], 00:42:48.127 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:42:48.127 | 99.99th=[ 435] 00:42:48.127 bw ( KiB/s): min= 176, max= 384, per=4.45%, avg=246.40, stdev=51.23, samples=20 00:42:48.127 iops : min= 44, max= 96, avg=61.60, stdev=12.81, samples=20 00:42:48.127 lat (msec) : 100=5.06%, 250=31.65%, 500=63.29% 00:42:48.127 cpu : usr=98.70%, sys=0.90%, ctx=20, majf=0, minf=9 00:42:48.127 IO depths : 1=0.6%, 2=1.6%, 4=8.9%, 8=76.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:42:48.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.127 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.127 issued rwts: total=632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.127 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.127 filename2: (groupid=0, jobs=1): err= 0: pid=1307158: Sat Dec 14 16:56:16 2024 00:42:48.127 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10106msec) 00:42:48.127 slat (nsec): min=4078, max=31640, avg=9859.21, stdev=3179.61 
00:42:48.127 clat (msec): min=183, max=456, avg=272.39, stdev=56.33 00:42:48.127 lat (msec): min=183, max=456, avg=272.40, stdev=56.33 00:42:48.127 clat percentiles (msec): 00:42:48.127 | 1.00th=[ 184], 5.00th=[ 209], 10.00th=[ 211], 20.00th=[ 226], 00:42:48.127 | 30.00th=[ 230], 40.00th=[ 255], 50.00th=[ 279], 60.00th=[ 284], 00:42:48.127 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 321], 95.00th=[ 426], 00:42:48.127 | 99.00th=[ 451], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:42:48.127 | 99.99th=[ 456] 00:42:48.127 bw ( KiB/s): min= 128, max= 368, per=4.16%, avg=230.40, stdev=54.05, samples=20 00:42:48.127 iops : min= 32, max= 92, avg=57.60, stdev=13.51, samples=20 00:42:48.127 lat (msec) : 250=39.36%, 500=60.64% 00:42:48.127 cpu : usr=98.73%, sys=0.88%, ctx=13, majf=0, minf=9 00:42:48.127 IO depths : 1=0.7%, 2=2.2%, 4=10.3%, 8=74.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:42:48.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.127 complete : 0=0.0%, 4=89.8%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.127 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.127 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.127 filename2: (groupid=0, jobs=1): err= 0: pid=1307159: Sat Dec 14 16:56:16 2024 00:42:48.127 read: IOPS=69, BW=277KiB/s (284kB/s)(2816KiB/10162msec) 00:42:48.127 slat (nsec): min=7145, max=62867, avg=20364.05, stdev=8227.67 00:42:48.127 clat (msec): min=2, max=289, avg=230.32, stdev=85.85 00:42:48.127 lat (msec): min=2, max=289, avg=230.34, stdev=85.85 00:42:48.127 clat percentiles (msec): 00:42:48.127 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 65], 20.00th=[ 222], 00:42:48.127 | 30.00th=[ 226], 40.00th=[ 234], 50.00th=[ 275], 60.00th=[ 279], 00:42:48.127 | 70.00th=[ 284], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 288], 00:42:48.127 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 292], 00:42:48.127 | 99.99th=[ 292] 00:42:48.127 bw ( KiB/s): min= 144, max= 896, per=4.98%, 
avg=275.20, stdev=154.48, samples=20 00:42:48.127 iops : min= 36, max= 224, avg=68.80, stdev=38.62, samples=20 00:42:48.127 lat (msec) : 4=4.55%, 10=4.55%, 100=4.26%, 250=29.83%, 500=56.82% 00:42:48.127 cpu : usr=98.61%, sys=0.96%, ctx=13, majf=0, minf=9 00:42:48.127 IO depths : 1=1.0%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:42:48.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.127 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.127 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.127 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:48.127 00:42:48.127 Run status group 0 (all jobs): 00:42:48.127 READ: bw=5527KiB/s (5660kB/s), 164KiB/s-277KiB/s (168kB/s-284kB/s), io=54.9MiB (57.5MB), run=10106-10162msec 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.127 16:56:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 bdev_null0
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 [2024-12-14 16:56:17.139796] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 bdev_null1
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.127 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params --
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:42:48.128 {
00:42:48.128 "params": {
00:42:48.128 "name": "Nvme$subsystem",
00:42:48.128 "trtype": "$TEST_TRANSPORT",
00:42:48.128 "traddr": "$NVMF_FIRST_TARGET_IP",
00:42:48.128 "adrfam": "ipv4",
00:42:48.128 "trsvcid": "$NVMF_PORT",
00:42:48.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:42:48.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:42:48.128 "hdgst": ${hdgst:-false},
00:42:48.128 "ddgst": ${ddgst:-false}
00:42:48.128 },
00:42:48.128 "method": "bdev_nvme_attach_controller"
00:42:48.128 }
00:42:48.128 EOF
00:42:48.128 )")
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:42:48.128 {
00:42:48.128 "params": {
00:42:48.128 "name": "Nvme$subsystem",
00:42:48.128 "trtype": "$TEST_TRANSPORT",
00:42:48.128 "traddr": "$NVMF_FIRST_TARGET_IP",
00:42:48.128 "adrfam": "ipv4",
00:42:48.128 "trsvcid": "$NVMF_PORT",
00:42:48.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:42:48.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:42:48.128 "hdgst": ${hdgst:-false},
00:42:48.128 "ddgst": ${ddgst:-false}
00:42:48.128 },
00:42:48.128 "method": "bdev_nvme_attach_controller"
00:42:48.128 }
00:42:48.128 EOF
00:42:48.128 )")
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:42:48.128 "params": {
00:42:48.128 "name": "Nvme0",
00:42:48.128 "trtype": "tcp",
00:42:48.128 "traddr": "10.0.0.2",
00:42:48.128 "adrfam": "ipv4",
00:42:48.128 "trsvcid": "4420",
00:42:48.128 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:42:48.128 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:42:48.128 "hdgst": false,
00:42:48.128 "ddgst": false
00:42:48.128 },
00:42:48.128 "method": "bdev_nvme_attach_controller"
00:42:48.128 },{
00:42:48.128 "params": {
00:42:48.128 "name": "Nvme1",
00:42:48.128 "trtype": "tcp",
00:42:48.128 "traddr": "10.0.0.2",
00:42:48.128 "adrfam": "ipv4",
00:42:48.128 "trsvcid": "4420",
00:42:48.128 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:42:48.128 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:42:48.128 "hdgst": false,
00:42:48.128 "ddgst": false
00:42:48.128 },
00:42:48.128 "method": "bdev_nvme_attach_controller"
00:42:48.128 }'
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- #
asan_lib=
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:42:48.128 16:56:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:42:48.128 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:42:48.128 ...
00:42:48.128 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:42:48.128 ...
00:42:48.128 fio-3.35
00:42:48.128 Starting 4 threads
00:42:53.459
00:42:53.459 filename0: (groupid=0, jobs=1): err= 0: pid=1309125: Sat Dec 14 16:56:23 2024
00:42:53.459 read: IOPS=2768, BW=21.6MiB/s (22.7MB/s)(108MiB/5004msec)
00:42:53.459 slat (nsec): min=6090, max=54142, avg=9258.78, stdev=4511.12
00:42:53.459 clat (usec): min=755, max=5138, avg=2862.13, stdev=409.90
00:42:53.459 lat (usec): min=767, max=5149, avg=2871.39, stdev=409.83
00:42:53.459 clat percentiles (usec):
00:42:53.459 | 1.00th=[ 1795], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2540],
00:42:53.459 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 2966],
00:42:53.459 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 3523],
00:42:53.459 | 99.00th=[ 4047], 99.50th=[ 4359], 99.90th=[ 4817], 99.95th=[ 4948],
00:42:53.459 | 99.99th=[ 5145]
00:42:53.459 bw ( KiB/s): min=21392, max=23248, per=26.05%, avg=22152.00, stdev=531.58, samples=10
00:42:53.459 iops : min= 2674, max= 2906, avg=2769.00, stdev=66.45, samples=10
00:42:53.459 lat (usec) : 1000=0.01%
00:42:53.459 lat (msec) : 2=2.12%, 4=96.74%, 10=1.13%
00:42:53.459 cpu : usr=95.92%, sys=3.78%, ctx=7, majf=0, minf=0
00:42:53.459 IO depths : 1=0.3%, 2=4.5%, 4=67.1%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:42:53.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:53.459 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:53.459 issued rwts: total=13853,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:53.459 latency : target=0, window=0, percentile=100.00%, depth=8
00:42:53.459 filename0: (groupid=0, jobs=1): err= 0: pid=1309126: Sat Dec 14 16:56:23 2024
00:42:53.459 read: IOPS=2634, BW=20.6MiB/s (21.6MB/s)(103MiB/5001msec)
00:42:53.459 slat (nsec): min=6097, max=55992, avg=9346.04, stdev=4807.48
00:42:53.459 clat (usec): min=626, max=5270, avg=3009.01, stdev=460.53
00:42:53.459 lat (usec): min=638, max=5281, avg=3018.36, stdev=460.33
00:42:53.459 clat percentiles (usec):
00:42:53.459 | 1.00th=[ 1991], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2704],
00:42:53.459 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999],
00:42:53.459 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3851],
00:42:53.459 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5211],
00:42:53.459 | 99.99th=[ 5276]
00:42:53.459 bw ( KiB/s): min=20208, max=21648, per=24.77%, avg=21058.78, stdev=475.22, samples=9
00:42:53.459 iops : min= 2526, max= 2706, avg=2632.33, stdev=59.41, samples=9
00:42:53.459 lat (usec) : 750=0.05%, 1000=0.04%
00:42:53.459 lat (msec) : 2=0.96%, 4=95.31%, 10=3.64%
00:42:53.459 cpu : usr=95.66%, sys=4.04%, ctx=8, majf=0, minf=9
00:42:53.459 IO depths : 1=0.1%, 2=3.6%, 4=67.6%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:42:53.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:53.459 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:53.459 issued rwts: total=13177,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:53.459 latency : target=0, window=0, percentile=100.00%, depth=8
00:42:53.459 filename1: (groupid=0, jobs=1): err= 0: pid=1309127: Sat Dec 14 16:56:23 2024
00:42:53.459 read: IOPS=2709, BW=21.2MiB/s (22.2MB/s)(106MiB/5002msec)
00:42:53.459 slat (nsec): min=6109, max=55135, avg=9454.57, stdev=4802.61
00:42:53.459 clat (usec): min=845, max=5750, avg=2925.21, stdev=456.47
00:42:53.459 lat (usec): min=855, max=5761, avg=2934.66, stdev=456.36
00:42:53.459 clat percentiles (usec):
00:42:53.459 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2606],
00:42:53.459 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2966],
00:42:53.460 | 70.00th=[ 2999], 80.00th=[ 3163], 90.00th=[ 3425], 95.00th=[ 3752],
00:42:53.460 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5211], 99.95th=[ 5342],
00:42:53.460 | 99.99th=[ 5735]
00:42:53.460 bw ( KiB/s): min=20816, max=22416, per=25.51%, avg=21690.67, stdev=592.70, samples=9
00:42:53.460 iops : min= 2602, max= 2802, avg=2711.33, stdev=74.09, samples=9
00:42:53.460 lat (usec) : 1000=0.01%
00:42:53.460 lat (msec) : 2=1.53%, 4=95.25%, 10=3.21%
00:42:53.460 cpu : usr=95.86%, sys=3.82%, ctx=7, majf=0, minf=0
00:42:53.460 IO depths : 1=0.1%, 2=3.9%, 4=66.5%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:42:53.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:53.460 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:53.460 issued rwts: total=13553,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:53.460 latency : target=0, window=0, percentile=100.00%, depth=8
00:42:53.460 filename1: (groupid=0, jobs=1): err= 0: pid=1309129: Sat Dec 14 16:56:23 2024
00:42:53.460 read: IOPS=2519, BW=19.7MiB/s (20.6MB/s)(98.4MiB/5001msec)
00:42:53.460 slat (nsec): min=6172, max=61821, avg=9511.17, stdev=4348.98
00:42:53.460 clat (usec): min=1203, max=5535, avg=3148.18, stdev=494.45
00:42:53.460 lat (usec): min=1210, max=5547, avg=3157.69, stdev=494.17
00:42:53.460 clat percentiles (usec):
00:42:53.460 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2737], 20.00th=[ 2868],
00:42:53.460 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064],
00:42:53.460 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 4228],
00:42:53.460 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5473], 99.95th=[ 5538],
00:42:53.460 | 99.99th=[ 5538]
00:42:53.460 bw ( KiB/s): min=18976, max=21152, per=23.74%, avg=20183.11, stdev=793.74, samples=9
00:42:53.460 iops : min= 2372, max= 2644, avg=2522.89, stdev=99.22, samples=9
00:42:53.460 lat (msec) : 2=0.40%, 4=92.97%, 10=6.64%
00:42:53.460 cpu : usr=95.98%, sys=3.72%, ctx=6, majf=0, minf=9
00:42:53.460 IO depths : 1=0.1%, 2=2.0%, 4=70.1%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:42:53.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:53.460 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:53.460 issued rwts: total=12599,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:53.460 latency : target=0, window=0, percentile=100.00%, depth=8
00:42:53.460
00:42:53.460 Run status group 0 (all jobs):
00:42:53.460 READ: bw=83.0MiB/s (87.1MB/s), 19.7MiB/s-21.6MiB/s (20.6MB/s-22.7MB/s), io=415MiB (436MB), run=5001-5004msec
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params --
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:53.460 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:53.719 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:53.719
00:42:53.719 real 0m24.545s
00:42:53.719 user 4m55.807s
00:42:53.719 sys 0m4.476s
00:42:53.719 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:53.719 16:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:42:53.719 ************************************
00:42:53.719 END TEST fio_dif_rand_params
00:42:53.719 ************************************
00:42:53.719 16:56:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:42:53.719 16:56:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:42:53.719 16:56:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:42:53.719 16:56:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:42:53.719 ************************************
00:42:53.719 START TEST fio_dif_digest
00:42:53.719 ************************************
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
00:42:53.719 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:42:53.720 bdev_null0
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:42:53.720 [2024-12-14 16:56:23.648193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest --
target/dif.sh@51 -- # gen_nvmf_target_json 0
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=()
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:42:53.720 {
00:42:53.720 "params": {
00:42:53.720 "name": "Nvme$subsystem",
00:42:53.720 "trtype": "$TEST_TRANSPORT",
00:42:53.720 "traddr": "$NVMF_FIRST_TARGET_IP",
00:42:53.720 "adrfam": "ipv4",
00:42:53.720 "trsvcid": "$NVMF_PORT",
00:42:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:42:53.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:42:53.720 "hdgst": ${hdgst:-false},
00:42:53.720 "ddgst": ${ddgst:-false}
00:42:53.720 },
00:42:53.720 "method": "bdev_nvme_attach_controller"
00:42:53.720 }
00:42:53.720 EOF
00:42:53.720 )")
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib=
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:42:53.720 "params": {
00:42:53.720 "name": "Nvme0",
00:42:53.720 "trtype": "tcp",
00:42:53.720 "traddr": "10.0.0.2",
00:42:53.720 "adrfam": "ipv4",
00:42:53.720 "trsvcid": "4420",
00:42:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:42:53.720 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:42:53.720 "hdgst": true,
00:42:53.720 "ddgst": true
00:42:53.720 },
00:42:53.720 "method": "bdev_nvme_attach_controller"
00:42:53.720 }'
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:42:53.720 16:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:42:53.979 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:42:53.979 ...
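The fio summary lines in the digest results below each report IOPS, bandwidth in both MiB/s and MB/s, a total I/O count, and a runtime. These figures are mutually consistent and can be cross-checked by hand; the sketch below does so for the first digest job, using the numbers taken verbatim from its summary (`issued rwts: total=2924` over 10047 msec at a fixed 128 KiB block size):

```python
# Cross-check fio's summary arithmetic for the first digest job:
# "read: IOPS=291, BW=36.4MiB/s (38.1MB/s)(366MiB/10047msec)".
total_ios = 2924           # from "issued rwts: total=2924"
runtime_s = 10.047         # from "run=...10047msec"
bs = 128 * 1024            # fixed 128 KiB block size (bs=128k,128k,128k)

iops = total_ios / runtime_s
bw_mib = iops * bs / 2**20   # MiB/s, fio's primary figure
bw_mb = iops * bs / 1e6      # MB/s, fio's parenthesized figure

print(f"IOPS={iops:.0f} BW={bw_mib:.1f}MiB/s ({bw_mb:.1f}MB/s)")
```

This reproduces the reported 291 IOPS and roughly 36.4 MiB/s (38.1 MB/s); the MiB/MB distinction explains why fio prints two bandwidth numbers.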
00:42:53.979 fio-3.35
00:42:53.979 Starting 3 threads
00:43:06.184
00:43:06.184 filename0: (groupid=0, jobs=1): err= 0: pid=1310170: Sat Dec 14 16:56:34 2024
00:43:06.184 read: IOPS=291, BW=36.4MiB/s (38.1MB/s)(366MiB/10047msec)
00:43:06.184 slat (nsec): min=6605, max=41258, avg=17811.83, stdev=7725.52
00:43:06.184 clat (usec): min=5043, max=52444, avg=10273.03, stdev=1265.05
00:43:06.184 lat (usec): min=5053, max=52466, avg=10290.84, stdev=1265.03
00:43:06.184 clat percentiles (usec):
00:43:06.184 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634],
00:43:06.184 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421],
00:43:06.184 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338],
00:43:06.184 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12518], 99.95th=[48497],
00:43:06.184 | 99.99th=[52691]
00:43:06.184 bw ( KiB/s): min=36352, max=38144, per=35.31%, avg=37401.60, stdev=518.03, samples=20
00:43:06.184 iops : min= 284, max= 298, avg=292.20, stdev= 4.05, samples=20
00:43:06.184 lat (msec) : 10=34.40%, 20=65.53%, 50=0.03%, 100=0.03%
00:43:06.184 cpu : usr=96.05%, sys=3.65%, ctx=20, majf=0, minf=24
00:43:06.184 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:43:06.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:43:06.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:43:06.184 issued rwts: total=2924,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:43:06.184 latency : target=0, window=0, percentile=100.00%, depth=3
00:43:06.184 filename0: (groupid=0, jobs=1): err= 0: pid=1310171: Sat Dec 14 16:56:34 2024
00:43:06.184 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(341MiB/10044msec)
00:43:06.184 slat (nsec): min=6582, max=64142, avg=18818.00, stdev=5629.59
00:43:06.184 clat (usec): min=8190, max=49039, avg=11000.85, stdev=1248.46
00:43:06.184 lat (usec): min=8209, max=49051, avg=11019.67, stdev=1247.77
00:43:06.184 clat percentiles (usec):
00:43:06.184 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421],
00:43:06.184 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076],
00:43:06.184 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256],
00:43:06.184 | 99.00th=[12911], 99.50th=[13304], 99.90th=[15533], 99.95th=[45876],
00:43:06.184 | 99.99th=[49021]
00:43:06.184 bw ( KiB/s): min=33024, max=36608, per=32.97%, avg=34918.40, stdev=896.10, samples=20
00:43:06.184 iops : min= 258, max= 286, avg=272.80, stdev= 7.00, samples=20
00:43:06.184 lat (msec) : 10=9.78%, 20=90.15%, 50=0.07%
00:43:06.184 cpu : usr=94.10%, sys=3.92%, ctx=581, majf=0, minf=25
00:43:06.184 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:43:06.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:43:06.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:43:06.184 issued rwts: total=2730,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:43:06.185 latency : target=0, window=0, percentile=100.00%, depth=3
00:43:06.185 filename0: (groupid=0, jobs=1): err= 0: pid=1310172: Sat Dec 14 16:56:34 2024
00:43:06.185 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(332MiB/10045msec)
00:43:06.185 slat (nsec): min=6630, max=43960, avg=17434.36, stdev=7526.39
00:43:06.185 clat (usec): min=8755, max=48508, avg=11297.74, stdev=1211.10
00:43:06.185 lat (usec): min=8767, max=48536, avg=11315.17, stdev=1211.62
00:43:06.185 clat percentiles (usec):
00:43:06.185 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683],
00:43:06.185 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469],
00:43:06.185 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518],
00:43:06.185 | 99.00th=[13173], 99.50th=[13304], 99.90th=[14091], 99.95th=[44303],
00:43:06.185 | 99.99th=[48497]
00:43:06.185 bw ( KiB/s): min=33024, max=35328, per=32.11%, avg=34009.60, stdev=606.23, samples=20
00:43:06.185 iops : min= 258, max= 276, avg=265.70, stdev= 4.74, samples=20
00:43:06.185 lat (msec) : 10=3.87%, 20=96.05%, 50=0.08%
00:43:06.185 cpu : usr=96.17%, sys=3.53%, ctx=16, majf=0, minf=23
00:43:06.185 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:43:06.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:43:06.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:43:06.185 issued rwts: total=2659,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:43:06.185 latency : target=0, window=0, percentile=100.00%, depth=3
00:43:06.185
00:43:06.185 Run status group 0 (all jobs):
00:43:06.185 READ: bw=103MiB/s (108MB/s), 33.1MiB/s-36.4MiB/s (34.7MB/s-38.1MB/s), io=1039MiB (1090MB), run=10044-10047msec
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:06.185
00:43:06.185 real
0m11.054s 00:43:06.185 user 0m35.670s 00:43:06.185 sys 0m1.393s 00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:06.185 16:56:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:06.185 ************************************ 00:43:06.185 END TEST fio_dif_digest 00:43:06.185 ************************************ 00:43:06.185 16:56:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:06.185 16:56:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:06.185 rmmod nvme_tcp 00:43:06.185 rmmod nvme_fabrics 00:43:06.185 rmmod nvme_keyring 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1301784 ']' 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1301784 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1301784 ']' 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1301784 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1301784 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:06.185 16:56:34 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1301784' 00:43:06.185 killing process with pid 1301784 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1301784 00:43:06.185 16:56:34 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1301784 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:06.185 16:56:34 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:07.564 Waiting for block devices as requested 00:43:07.823 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:07.823 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:07.823 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:08.081 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:08.081 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:08.081 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:08.340 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:08.340 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:08.340 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:08.598 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:08.598 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:08.598 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:08.598 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:08.857 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:08.857 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:08.857 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:09.115 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:09.115 16:56:39 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:09.115 16:56:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:09.115 16:56:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:11.647 16:56:41 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:11.647 00:43:11.647 real 1m14.093s 00:43:11.647 user 7m14.606s 00:43:11.647 sys 0m18.859s 00:43:11.647 16:56:41 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:11.647 16:56:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:11.647 ************************************ 00:43:11.647 END TEST nvmf_dif 00:43:11.647 ************************************ 00:43:11.647 16:56:41 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:11.647 16:56:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:11.647 16:56:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:11.647 16:56:41 -- common/autotest_common.sh@10 -- # set +x 00:43:11.647 ************************************ 00:43:11.647 START TEST nvmf_abort_qd_sizes 00:43:11.647 ************************************ 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:11.647 * Looking for test storage... 
00:43:11.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:11.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.647 --rc genhtml_branch_coverage=1 00:43:11.647 --rc genhtml_function_coverage=1 00:43:11.647 --rc genhtml_legend=1 00:43:11.647 --rc geninfo_all_blocks=1 00:43:11.647 --rc geninfo_unexecuted_blocks=1 00:43:11.647 00:43:11.647 ' 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:11.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.647 --rc genhtml_branch_coverage=1 00:43:11.647 --rc genhtml_function_coverage=1 00:43:11.647 --rc genhtml_legend=1 00:43:11.647 --rc 
geninfo_all_blocks=1 00:43:11.647 --rc geninfo_unexecuted_blocks=1 00:43:11.647 00:43:11.647 ' 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:11.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.647 --rc genhtml_branch_coverage=1 00:43:11.647 --rc genhtml_function_coverage=1 00:43:11.647 --rc genhtml_legend=1 00:43:11.647 --rc geninfo_all_blocks=1 00:43:11.647 --rc geninfo_unexecuted_blocks=1 00:43:11.647 00:43:11.647 ' 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:11.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.647 --rc genhtml_branch_coverage=1 00:43:11.647 --rc genhtml_function_coverage=1 00:43:11.647 --rc genhtml_legend=1 00:43:11.647 --rc geninfo_all_blocks=1 00:43:11.647 --rc geninfo_unexecuted_blocks=1 00:43:11.647 00:43:11.647 ' 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:11.647 16:56:41 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:11.647 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:11.648 16:56:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:11.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:11.648 16:56:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:16.922 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:16.922 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:16.922 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:16.922 16:56:46 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:16.923 16:56:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:16.923 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:16.923 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:16.923 Found net devices under 0000:af:00.0: cvl_0_0 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:16.923 Found net devices under 0000:af:00.1: cvl_0_1 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:16.923 16:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:17.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:17.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:43:17.182 00:43:17.182 --- 10.0.0.2 ping statistics --- 00:43:17.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:17.182 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:17.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:17.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:43:17.182 00:43:17.182 --- 10.0.0.1 ping statistics --- 00:43:17.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:17.182 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:17.182 16:56:47 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:20.472 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:20.472 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:20.731 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:20.989 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:20.989 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:20.989 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:20.989 16:56:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:20.989 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1317824 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1317824 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1317824 ']' 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:20.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:20.990 16:56:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:20.990 [2024-12-14 16:56:51.021108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
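[editor's note] The `waitforlisten 1317824` step above blocks until the freshly launched `nvmf_tgt` answers on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`). A minimal, illustrative sketch of that polling pattern, with plain file existence standing in for the real socket liveness check and the function name and retry count invented for this example:

```shell
# Simplified stand-in for the waitforlisten helper: poll until a path
# appears (the real helper probes the SPDK RPC socket), give up after
# max_retries attempts. Names and defaults here are illustrative only.
waitforfile() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

The real helper in `autotest_common.sh` additionally validates that the PID it was given is still alive between retries.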
00:43:20.990 [2024-12-14 16:56:51.021157] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:21.248 [2024-12-14 16:56:51.100834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:21.248 [2024-12-14 16:56:51.125011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:21.248 [2024-12-14 16:56:51.125048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:21.248 [2024-12-14 16:56:51.125055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:21.248 [2024-12-14 16:56:51.125062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:21.248 [2024-12-14 16:56:51.125067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:21.248 [2024-12-14 16:56:51.126550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:21.248 [2024-12-14 16:56:51.126658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:21.248 [2024-12-14 16:56:51.126691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.248 [2024-12-14 16:56:51.126692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:21.248 16:56:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:21.249 16:56:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:21.249 16:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:21.249 16:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:21.249 16:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:21.249 ************************************ 00:43:21.249 START TEST spdk_target_abort 00:43:21.249 ************************************ 00:43:21.249 16:56:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:21.249 16:56:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:21.249 16:56:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:21.249 16:56:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.249 16:56:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:24.530 spdk_targetn1 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:24.530 [2024-12-14 16:56:54.133611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:24.530 [2024-12-14 16:56:54.181985] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:24.530 16:56:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:27.813 Initializing NVMe Controllers 00:43:27.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:27.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:27.813 Initialization complete. Launching workers. 
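[editor's note] The `for r in trtype adrfam traddr trsvcid subnqn` trace above shows `abort_qd_sizes.sh` assembling the `-r` transport string one `key:value` field at a time. A self-contained sketch of that accumulation (the function name is invented; the field order and format match the trace):

```shell
# Mirror of the target-string loop visible in the trace: append each
# field as "name:value", space-separated, in the order the script uses.
build_target() {
    trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    target=""
    for r in trtype adrfam traddr trsvcid subnqn; do
        eval "val=\$$r"                     # indirect lookup of field value
        target="${target:+$target }$r:$val" # prepend a space after the first field
    done
    printf '%s\n' "$target"
}
```

Called as `build_target tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn`, it reproduces the string handed to the abort example app above.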
00:43:27.813 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17713, failed: 0 00:43:27.813 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1373, failed to submit 16340 00:43:27.813 success 759, unsuccessful 614, failed 0 00:43:27.813 16:56:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:27.813 16:56:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:31.097 Initializing NVMe Controllers 00:43:31.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:31.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:31.097 Initialization complete. Launching workers. 00:43:31.097 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8461, failed: 0 00:43:31.097 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1223, failed to submit 7238 00:43:31.097 success 342, unsuccessful 881, failed 0 00:43:31.097 16:57:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:31.097 16:57:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:34.393 Initializing NVMe Controllers 00:43:34.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:34.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:34.393 Initialization complete. Launching workers. 
00:43:34.393 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38764, failed: 0 00:43:34.393 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2810, failed to submit 35954 00:43:34.393 success 593, unsuccessful 2217, failed 0 00:43:34.393 16:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:34.393 16:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.393 16:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.393 16:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.393 16:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:34.393 16:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.393 16:57:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1317824 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1317824 ']' 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1317824 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1317824 00:43:35.768 16:57:05 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1317824' 00:43:35.768 killing process with pid 1317824 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1317824 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1317824 00:43:35.768 00:43:35.768 real 0m14.335s 00:43:35.768 user 0m54.835s 00:43:35.768 sys 0m2.405s 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:35.768 ************************************ 00:43:35.768 END TEST spdk_target_abort 00:43:35.768 ************************************ 00:43:35.768 16:57:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:35.768 16:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:35.768 16:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:35.768 16:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:35.768 ************************************ 00:43:35.768 START TEST kernel_target_abort 00:43:35.768 ************************************ 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:35.768 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:43:35.769 16:57:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:35.769 16:57:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:38.306 Waiting for block devices as requested 00:43:38.565 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:38.565 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:38.565 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:38.825 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:38.825 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:38.825 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:39.084 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:39.084 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:39.084 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:39.084 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:39.344 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:39.344 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:39.344 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:39.603 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:39.603 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:39.603 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:39.863 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:43:39.863 16:57:09 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:39.863 No valid GPT data, bailing 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:39.863 00:43:39.863 Discovery Log Number of Records 2, Generation counter 2 00:43:39.863 =====Discovery Log Entry 0====== 00:43:39.863 trtype: tcp 00:43:39.863 adrfam: ipv4 00:43:39.863 subtype: current discovery subsystem 00:43:39.863 treq: not specified, sq flow control disable supported 00:43:39.863 portid: 1 00:43:39.863 trsvcid: 4420 00:43:39.863 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:39.863 traddr: 10.0.0.1 00:43:39.863 eflags: none 00:43:39.863 sectype: none 00:43:39.863 =====Discovery Log Entry 1====== 00:43:39.863 trtype: tcp 00:43:39.863 adrfam: ipv4 00:43:39.863 subtype: nvme subsystem 00:43:39.863 treq: not specified, sq flow control disable supported 00:43:39.863 portid: 1 00:43:39.863 trsvcid: 4420 00:43:39.863 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:39.863 traddr: 10.0.0.1 00:43:39.863 eflags: none 00:43:39.863 sectype: none 00:43:39.863 16:57:09 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:39.863 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:40.123 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:40.123 16:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:43.412 Initializing NVMe Controllers 00:43:43.412 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:43.412 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:43.412 Initialization complete. Launching workers. 
00:43:43.412 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95130, failed: 0 00:43:43.412 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95130, failed to submit 0 00:43:43.412 success 0, unsuccessful 95130, failed 0 00:43:43.412 16:57:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:43.412 16:57:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:46.774 Initializing NVMe Controllers 00:43:46.774 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:46.774 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:46.774 Initialization complete. Launching workers. 00:43:46.774 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151449, failed: 0 00:43:46.774 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38042, failed to submit 113407 00:43:46.774 success 0, unsuccessful 38042, failed 0 00:43:46.774 16:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:46.774 16:57:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:49.318 Initializing NVMe Controllers 00:43:49.318 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:49.318 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:49.318 Initialization complete. Launching workers. 
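[editor's note] Each abort-run summary above obeys two invariants: aborts submitted plus aborts that failed to submit equals I/Os completed, and success plus unsuccessful equals aborts submitted. A small hedged checker (the function name is invented; the sample numbers in the test come from the qd=4 spdk_target run earlier in this log):

```shell
# Sanity-check the accounting printed by the abort example app:
#   submitted + failed_to_submit == io_completed
#   success + unsuccessful      == submitted
check_abort_stats() {
    # args: io_completed submitted failed_to_submit success unsuccessful
    awk -v io="$1" -v sub_="$2" -v fts="$3" -v ok="$4" -v bad="$5" '
        BEGIN { exit !(sub_ + fts == io && ok + bad == sub_) }'
}
```

Note the kernel-target runs above report `success 0` by design: the kernel nvmet target completes I/O fast enough that the aborts arrive too late, so every submitted abort lands in `unsuccessful`.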
00:43:49.318 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142266, failed: 0 00:43:49.318 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35630, failed to submit 106636 00:43:49.318 success 0, unsuccessful 35630, failed 0 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:43:49.318 16:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:52.608 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:52.608 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:53.177 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:53.177 00:43:53.177 real 0m17.432s 00:43:53.177 user 0m9.189s 00:43:53.177 sys 0m4.948s 00:43:53.177 16:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:53.177 16:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:53.177 ************************************ 00:43:53.177 END TEST kernel_target_abort 00:43:53.177 ************************************ 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:53.177 rmmod nvme_tcp 00:43:53.177 rmmod nvme_fabrics 00:43:53.177 rmmod nvme_keyring 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1317824 ']' 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1317824 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1317824 ']' 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1317824 00:43:53.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1317824) - No such process 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1317824 is not found' 00:43:53.177 Process with pid 1317824 is not found 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:53.177 16:57:23 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:56.467 Waiting for block devices as requested 00:43:56.467 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:56.467 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:56.467 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:56.467 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:56.467 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:56.467 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:56.467 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:56.467 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:56.726 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:56.726 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:56.726 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:56.985 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:56.985 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:56.985 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:56.985 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:57.244 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:57.244 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:57.244 16:57:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.781 16:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:59.781 00:43:59.781 real 0m48.152s 00:43:59.781 user 1m8.334s 00:43:59.781 sys 0m15.985s 00:43:59.781 16:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:59.781 16:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:59.781 ************************************ 00:43:59.781 END TEST nvmf_abort_qd_sizes 00:43:59.781 ************************************ 00:43:59.781 16:57:29 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:59.781 16:57:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:59.781 16:57:29 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:43:59.781 16:57:29 -- common/autotest_common.sh@10 -- # set +x 00:43:59.781 ************************************ 00:43:59.781 START TEST keyring_file 00:43:59.781 ************************************ 00:43:59.781 16:57:29 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:59.781 * Looking for test storage... 00:43:59.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:59.781 16:57:29 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:59.781 16:57:29 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:43:59.781 16:57:29 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:59.781 16:57:29 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:59.781 16:57:29 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:59.781 16:57:29 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:59.781 16:57:29 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:59.781 16:57:29 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.781 --rc genhtml_branch_coverage=1 00:43:59.781 --rc genhtml_function_coverage=1 00:43:59.781 --rc genhtml_legend=1 00:43:59.781 --rc geninfo_all_blocks=1 00:43:59.781 --rc geninfo_unexecuted_blocks=1 00:43:59.781 00:43:59.781 ' 00:43:59.781 16:57:29 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:59.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.781 --rc genhtml_branch_coverage=1 00:43:59.781 --rc genhtml_function_coverage=1 00:43:59.782 --rc genhtml_legend=1 00:43:59.782 --rc geninfo_all_blocks=1 00:43:59.782 --rc 
geninfo_unexecuted_blocks=1 00:43:59.782 00:43:59.782 ' 00:43:59.782 16:57:29 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:59.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.782 --rc genhtml_branch_coverage=1 00:43:59.782 --rc genhtml_function_coverage=1 00:43:59.782 --rc genhtml_legend=1 00:43:59.782 --rc geninfo_all_blocks=1 00:43:59.782 --rc geninfo_unexecuted_blocks=1 00:43:59.782 00:43:59.782 ' 00:43:59.782 16:57:29 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:59.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.782 --rc genhtml_branch_coverage=1 00:43:59.782 --rc genhtml_function_coverage=1 00:43:59.782 --rc genhtml_legend=1 00:43:59.782 --rc geninfo_all_blocks=1 00:43:59.782 --rc geninfo_unexecuted_blocks=1 00:43:59.782 00:43:59.782 ' 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:59.782 16:57:29 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:59.782 16:57:29 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:59.782 16:57:29 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:59.782 16:57:29 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:59.782 16:57:29 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:59.782 16:57:29 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.782 16:57:29 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.782 16:57:29 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.782 16:57:29 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:59.782 16:57:29 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:59.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FVXzpoDkHk 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FVXzpoDkHk 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FVXzpoDkHk 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FVXzpoDkHk 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.e2EmUHFaqK 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:59.782 16:57:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.e2EmUHFaqK 00:43:59.782 16:57:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.e2EmUHFaqK 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.e2EmUHFaqK 
00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@30 -- # tgtpid=1326909 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:59.782 16:57:29 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1326909 00:43:59.782 16:57:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1326909 ']' 00:43:59.782 16:57:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:59.782 16:57:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:59.782 16:57:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:59.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:59.782 16:57:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:59.782 16:57:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:59.782 [2024-12-14 16:57:29.766814] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:43:59.782 [2024-12-14 16:57:29.766864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326909 ] 00:43:59.782 [2024-12-14 16:57:29.842403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:59.782 [2024-12-14 16:57:29.864527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:00.042 16:57:30 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:00.042 [2024-12-14 16:57:30.069622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:00.042 null0 00:44:00.042 [2024-12-14 16:57:30.101668] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:00.042 [2024-12-14 16:57:30.101964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.042 16:57:30 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:44:00.042 16:57:30 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:00.301 [2024-12-14 16:57:30.133742] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:00.301 request: 00:44:00.301 { 00:44:00.301 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:00.301 "secure_channel": false, 00:44:00.301 "listen_address": { 00:44:00.301 "trtype": "tcp", 00:44:00.301 "traddr": "127.0.0.1", 00:44:00.301 "trsvcid": "4420" 00:44:00.301 }, 00:44:00.301 "method": "nvmf_subsystem_add_listener", 00:44:00.301 "req_id": 1 00:44:00.301 } 00:44:00.301 Got JSON-RPC error response 00:44:00.301 response: 00:44:00.301 { 00:44:00.301 "code": -32602, 00:44:00.301 "message": "Invalid parameters" 00:44:00.301 } 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:00.301 16:57:30 keyring_file -- keyring/file.sh@47 -- # bperfpid=1326922 00:44:00.301 16:57:30 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1326922 /var/tmp/bperf.sock 00:44:00.301 16:57:30 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:00.301 16:57:30 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1326922 ']' 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:00.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:00.301 [2024-12-14 16:57:30.188026] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:00.301 [2024-12-14 16:57:30.188068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326922 ] 00:44:00.301 [2024-12-14 16:57:30.263947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:00.301 [2024-12-14 16:57:30.286463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:00.301 16:57:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:00.301 16:57:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FVXzpoDkHk 00:44:00.301 16:57:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FVXzpoDkHk 00:44:00.560 16:57:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.e2EmUHFaqK 00:44:00.560 16:57:30 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.e2EmUHFaqK 00:44:00.819 16:57:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:00.819 16:57:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:00.819 16:57:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.819 16:57:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:00.819 16:57:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.076 16:57:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FVXzpoDkHk == \/\t\m\p\/\t\m\p\.\F\V\X\z\p\o\D\k\H\k ]] 00:44:01.076 16:57:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:01.076 16:57:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:01.076 16:57:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:01.076 16:57:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.076 16:57:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.076 16:57:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.e2EmUHFaqK == \/\t\m\p\/\t\m\p\.\e\2\E\m\U\H\F\a\q\K ]] 00:44:01.076 16:57:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:01.076 16:57:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:01.076 16:57:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.076 16:57:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.076 16:57:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:01.076 16:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:44:01.334 16:57:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:01.334 16:57:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:01.334 16:57:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:01.334 16:57:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.334 16:57:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.334 16:57:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:01.334 16:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.592 16:57:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:01.592 16:57:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:01.592 16:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:01.851 [2024-12-14 16:57:31.704790] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:01.851 nvme0n1 00:44:01.851 16:57:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:01.851 16:57:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:01.851 16:57:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.851 16:57:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.851 16:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.851 16:57:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:44:02.110 16:57:31 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:02.110 16:57:31 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:02.110 16:57:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:02.110 16:57:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:02.110 16:57:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:02.110 16:57:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:02.110 16:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:02.110 16:57:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:02.110 16:57:32 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:02.369 Running I/O for 1 seconds... 00:44:03.306 19299.00 IOPS, 75.39 MiB/s 00:44:03.306 Latency(us) 00:44:03.306 [2024-12-14T15:57:33.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:03.306 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:03.306 nvme0n1 : 1.00 19345.79 75.57 0.00 0.00 6603.88 2621.44 17975.59 00:44:03.306 [2024-12-14T15:57:33.392Z] =================================================================================================================== 00:44:03.306 [2024-12-14T15:57:33.392Z] Total : 19345.79 75.57 0.00 0.00 6603.88 2621.44 17975.59 00:44:03.306 { 00:44:03.306 "results": [ 00:44:03.306 { 00:44:03.306 "job": "nvme0n1", 00:44:03.306 "core_mask": "0x2", 00:44:03.306 "workload": "randrw", 00:44:03.306 "percentage": 50, 00:44:03.306 "status": "finished", 00:44:03.306 "queue_depth": 128, 00:44:03.306 "io_size": 4096, 00:44:03.306 "runtime": 1.004301, 00:44:03.306 "iops": 19345.793741119447, 00:44:03.306 "mibps": 75.56950680124784, 00:44:03.306 
"io_failed": 0, 00:44:03.306 "io_timeout": 0, 00:44:03.306 "avg_latency_us": 6603.877692501882, 00:44:03.306 "min_latency_us": 2621.44, 00:44:03.306 "max_latency_us": 17975.588571428572 00:44:03.306 } 00:44:03.306 ], 00:44:03.306 "core_count": 1 00:44:03.306 } 00:44:03.306 16:57:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:03.306 16:57:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:03.565 16:57:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:03.565 16:57:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:03.565 16:57:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:03.565 16:57:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:03.565 16:57:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:03.565 16:57:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:03.824 16:57:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:03.824 16:57:33 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:03.824 16:57:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:03.824 16:57:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:03.824 16:57:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:03.824 16:57:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:03.824 16:57:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:03.824 16:57:33 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:03.824 16:57:33 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:03.824 16:57:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:03.824 16:57:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:03.824 16:57:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:03.824 16:57:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:03.824 16:57:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:03.824 16:57:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:03.824 16:57:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:03.824 16:57:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:04.083 [2024-12-14 16:57:34.073786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:04.083 [2024-12-14 16:57:34.074211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1add6a0 (107): Transport endpoint is not connected 00:44:04.083 [2024-12-14 16:57:34.075206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1add6a0 (9): Bad file descriptor 00:44:04.083 [2024-12-14 16:57:34.076208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:04.083 [2024-12-14 16:57:34.076220] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:04.083 [2024-12-14 16:57:34.076227] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:04.083 [2024-12-14 16:57:34.076236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:04.083 request: 00:44:04.083 { 00:44:04.083 "name": "nvme0", 00:44:04.083 "trtype": "tcp", 00:44:04.083 "traddr": "127.0.0.1", 00:44:04.083 "adrfam": "ipv4", 00:44:04.083 "trsvcid": "4420", 00:44:04.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:04.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:04.083 "prchk_reftag": false, 00:44:04.083 "prchk_guard": false, 00:44:04.083 "hdgst": false, 00:44:04.083 "ddgst": false, 00:44:04.083 "psk": "key1", 00:44:04.083 "allow_unrecognized_csi": false, 00:44:04.083 "method": "bdev_nvme_attach_controller", 00:44:04.083 "req_id": 1 00:44:04.083 } 00:44:04.083 Got JSON-RPC error response 00:44:04.083 response: 00:44:04.083 { 00:44:04.083 "code": -5, 00:44:04.083 "message": "Input/output error" 00:44:04.083 } 00:44:04.083 16:57:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:04.083 16:57:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:04.083 16:57:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:04.083 16:57:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:04.083 16:57:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:04.083 16:57:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:04.083 16:57:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:04.083 16:57:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:04.083 16:57:34 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:04.084 16:57:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.342 16:57:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:04.342 16:57:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:04.342 16:57:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:04.342 16:57:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:04.342 16:57:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:04.342 16:57:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:04.342 16:57:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.601 16:57:34 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:04.601 16:57:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:04.601 16:57:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:04.601 16:57:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:04.601 16:57:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:04.876 16:57:34 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:04.876 16:57:34 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:04.876 16:57:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:05.135 16:57:35 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:05.135 16:57:35 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.FVXzpoDkHk 00:44:05.135 16:57:35 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FVXzpoDkHk 00:44:05.135 16:57:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:05.135 16:57:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FVXzpoDkHk 00:44:05.135 16:57:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:05.135 16:57:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:05.135 16:57:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:05.135 16:57:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:05.135 16:57:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FVXzpoDkHk 00:44:05.135 16:57:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FVXzpoDkHk 00:44:05.394 [2024-12-14 16:57:35.226287] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FVXzpoDkHk': 0100660 00:44:05.394 [2024-12-14 16:57:35.226312] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:05.394 request: 00:44:05.394 { 00:44:05.394 "name": "key0", 00:44:05.394 "path": "/tmp/tmp.FVXzpoDkHk", 00:44:05.394 "method": "keyring_file_add_key", 00:44:05.394 "req_id": 1 00:44:05.394 } 00:44:05.394 Got JSON-RPC error response 00:44:05.394 response: 00:44:05.394 { 00:44:05.394 "code": -1, 00:44:05.394 "message": "Operation not permitted" 00:44:05.394 } 00:44:05.394 16:57:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:05.394 16:57:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:05.394 16:57:35 keyring_file -- common/autotest_common.sh@674 
-- # [[ -n '' ]] 00:44:05.394 16:57:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:05.394 16:57:35 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FVXzpoDkHk 00:44:05.394 16:57:35 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FVXzpoDkHk 00:44:05.394 16:57:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FVXzpoDkHk 00:44:05.394 16:57:35 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FVXzpoDkHk 00:44:05.394 16:57:35 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:05.394 16:57:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:05.394 16:57:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:05.394 16:57:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:05.394 16:57:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:05.394 16:57:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:05.652 16:57:35 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:05.652 16:57:35 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:05.652 16:57:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:05.652 16:57:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:05.652 16:57:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:05.652 16:57:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:44:05.652 16:57:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:05.652 16:57:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:05.652 16:57:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:05.652 16:57:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:05.911 [2024-12-14 16:57:35.815855] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FVXzpoDkHk': No such file or directory 00:44:05.911 [2024-12-14 16:57:35.815879] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:05.911 [2024-12-14 16:57:35.815895] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:05.911 [2024-12-14 16:57:35.815902] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:05.911 [2024-12-14 16:57:35.815909] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:05.911 [2024-12-14 16:57:35.815915] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:05.911 request: 00:44:05.911 { 00:44:05.911 "name": "nvme0", 00:44:05.911 "trtype": "tcp", 00:44:05.911 "traddr": "127.0.0.1", 00:44:05.911 "adrfam": "ipv4", 00:44:05.911 "trsvcid": "4420", 00:44:05.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:05.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:05.911 "prchk_reftag": 
false, 00:44:05.911 "prchk_guard": false, 00:44:05.911 "hdgst": false, 00:44:05.911 "ddgst": false, 00:44:05.911 "psk": "key0", 00:44:05.911 "allow_unrecognized_csi": false, 00:44:05.911 "method": "bdev_nvme_attach_controller", 00:44:05.911 "req_id": 1 00:44:05.911 } 00:44:05.911 Got JSON-RPC error response 00:44:05.911 response: 00:44:05.911 { 00:44:05.911 "code": -19, 00:44:05.911 "message": "No such device" 00:44:05.911 } 00:44:05.911 16:57:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:05.911 16:57:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:05.911 16:57:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:05.911 16:57:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:05.911 16:57:35 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:05.911 16:57:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:06.234 16:57:36 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dg6TC5XTS7 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:06.234 16:57:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:06.234 16:57:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 
00:44:06.234 16:57:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:06.234 16:57:36 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:06.234 16:57:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:06.234 16:57:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dg6TC5XTS7 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dg6TC5XTS7 00:44:06.234 16:57:36 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.dg6TC5XTS7 00:44:06.234 16:57:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dg6TC5XTS7 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dg6TC5XTS7 00:44:06.234 16:57:36 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:06.234 16:57:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:06.578 nvme0n1 00:44:06.578 16:57:36 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:06.578 16:57:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:06.578 16:57:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:06.578 16:57:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:06.578 16:57:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:06.578 16:57:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:44:06.837 16:57:36 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:06.837 16:57:36 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:06.837 16:57:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:07.096 16:57:36 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:07.096 16:57:36 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:07.096 16:57:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.096 16:57:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.096 16:57:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:07.096 16:57:37 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:07.096 16:57:37 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:07.096 16:57:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:07.096 16:57:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:07.096 16:57:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.096 16:57:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:07.096 16:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.355 16:57:37 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:07.355 16:57:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:07.355 16:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:07.614 16:57:37 keyring_file -- 
keyring/file.sh@105 -- # jq length 00:44:07.614 16:57:37 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:07.614 16:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.873 16:57:37 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:07.873 16:57:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dg6TC5XTS7 00:44:07.873 16:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dg6TC5XTS7 00:44:07.873 16:57:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.e2EmUHFaqK 00:44:07.873 16:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.e2EmUHFaqK 00:44:08.132 16:57:38 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:08.132 16:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:08.391 nvme0n1 00:44:08.391 16:57:38 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:08.391 16:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:08.649 16:57:38 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:08.649 "subsystems": [ 00:44:08.649 { 00:44:08.649 "subsystem": "keyring", 00:44:08.649 "config": [ 00:44:08.649 { 00:44:08.649 "method": 
"keyring_file_add_key", 00:44:08.649 "params": { 00:44:08.649 "name": "key0", 00:44:08.649 "path": "/tmp/tmp.dg6TC5XTS7" 00:44:08.649 } 00:44:08.649 }, 00:44:08.649 { 00:44:08.649 "method": "keyring_file_add_key", 00:44:08.649 "params": { 00:44:08.649 "name": "key1", 00:44:08.649 "path": "/tmp/tmp.e2EmUHFaqK" 00:44:08.649 } 00:44:08.649 } 00:44:08.649 ] 00:44:08.649 }, 00:44:08.649 { 00:44:08.649 "subsystem": "iobuf", 00:44:08.649 "config": [ 00:44:08.649 { 00:44:08.649 "method": "iobuf_set_options", 00:44:08.649 "params": { 00:44:08.649 "small_pool_count": 8192, 00:44:08.649 "large_pool_count": 1024, 00:44:08.649 "small_bufsize": 8192, 00:44:08.649 "large_bufsize": 135168, 00:44:08.649 "enable_numa": false 00:44:08.649 } 00:44:08.649 } 00:44:08.649 ] 00:44:08.649 }, 00:44:08.649 { 00:44:08.649 "subsystem": "sock", 00:44:08.649 "config": [ 00:44:08.649 { 00:44:08.649 "method": "sock_set_default_impl", 00:44:08.649 "params": { 00:44:08.649 "impl_name": "posix" 00:44:08.649 } 00:44:08.649 }, 00:44:08.649 { 00:44:08.649 "method": "sock_impl_set_options", 00:44:08.649 "params": { 00:44:08.649 "impl_name": "ssl", 00:44:08.649 "recv_buf_size": 4096, 00:44:08.649 "send_buf_size": 4096, 00:44:08.649 "enable_recv_pipe": true, 00:44:08.649 "enable_quickack": false, 00:44:08.649 "enable_placement_id": 0, 00:44:08.649 "enable_zerocopy_send_server": true, 00:44:08.649 "enable_zerocopy_send_client": false, 00:44:08.649 "zerocopy_threshold": 0, 00:44:08.649 "tls_version": 0, 00:44:08.649 "enable_ktls": false 00:44:08.649 } 00:44:08.649 }, 00:44:08.649 { 00:44:08.649 "method": "sock_impl_set_options", 00:44:08.649 "params": { 00:44:08.649 "impl_name": "posix", 00:44:08.649 "recv_buf_size": 2097152, 00:44:08.649 "send_buf_size": 2097152, 00:44:08.649 "enable_recv_pipe": true, 00:44:08.649 "enable_quickack": false, 00:44:08.649 "enable_placement_id": 0, 00:44:08.649 "enable_zerocopy_send_server": true, 00:44:08.649 "enable_zerocopy_send_client": false, 00:44:08.649 
"zerocopy_threshold": 0, 00:44:08.649 "tls_version": 0, 00:44:08.649 "enable_ktls": false 00:44:08.649 } 00:44:08.649 } 00:44:08.649 ] 00:44:08.649 }, 00:44:08.649 { 00:44:08.649 "subsystem": "vmd", 00:44:08.649 "config": [] 00:44:08.649 }, 00:44:08.649 { 00:44:08.649 "subsystem": "accel", 00:44:08.649 "config": [ 00:44:08.649 { 00:44:08.649 "method": "accel_set_options", 00:44:08.649 "params": { 00:44:08.649 "small_cache_size": 128, 00:44:08.649 "large_cache_size": 16, 00:44:08.649 "task_count": 2048, 00:44:08.649 "sequence_count": 2048, 00:44:08.649 "buf_count": 2048 00:44:08.649 } 00:44:08.649 } 00:44:08.649 ] 00:44:08.649 }, 00:44:08.649 { 00:44:08.649 "subsystem": "bdev", 00:44:08.649 "config": [ 00:44:08.649 { 00:44:08.649 "method": "bdev_set_options", 00:44:08.649 "params": { 00:44:08.649 "bdev_io_pool_size": 65535, 00:44:08.650 "bdev_io_cache_size": 256, 00:44:08.650 "bdev_auto_examine": true, 00:44:08.650 "iobuf_small_cache_size": 128, 00:44:08.650 "iobuf_large_cache_size": 16 00:44:08.650 } 00:44:08.650 }, 00:44:08.650 { 00:44:08.650 "method": "bdev_raid_set_options", 00:44:08.650 "params": { 00:44:08.650 "process_window_size_kb": 1024, 00:44:08.650 "process_max_bandwidth_mb_sec": 0 00:44:08.650 } 00:44:08.650 }, 00:44:08.650 { 00:44:08.650 "method": "bdev_iscsi_set_options", 00:44:08.650 "params": { 00:44:08.650 "timeout_sec": 30 00:44:08.650 } 00:44:08.650 }, 00:44:08.650 { 00:44:08.650 "method": "bdev_nvme_set_options", 00:44:08.650 "params": { 00:44:08.650 "action_on_timeout": "none", 00:44:08.650 "timeout_us": 0, 00:44:08.650 "timeout_admin_us": 0, 00:44:08.650 "keep_alive_timeout_ms": 10000, 00:44:08.650 "arbitration_burst": 0, 00:44:08.650 "low_priority_weight": 0, 00:44:08.650 "medium_priority_weight": 0, 00:44:08.650 "high_priority_weight": 0, 00:44:08.650 "nvme_adminq_poll_period_us": 10000, 00:44:08.650 "nvme_ioq_poll_period_us": 0, 00:44:08.650 "io_queue_requests": 512, 00:44:08.650 "delay_cmd_submit": true, 00:44:08.650 
"transport_retry_count": 4, 00:44:08.650 "bdev_retry_count": 3, 00:44:08.650 "transport_ack_timeout": 0, 00:44:08.650 "ctrlr_loss_timeout_sec": 0, 00:44:08.650 "reconnect_delay_sec": 0, 00:44:08.650 "fast_io_fail_timeout_sec": 0, 00:44:08.650 "disable_auto_failback": false, 00:44:08.650 "generate_uuids": false, 00:44:08.650 "transport_tos": 0, 00:44:08.650 "nvme_error_stat": false, 00:44:08.650 "rdma_srq_size": 0, 00:44:08.650 "io_path_stat": false, 00:44:08.650 "allow_accel_sequence": false, 00:44:08.650 "rdma_max_cq_size": 0, 00:44:08.650 "rdma_cm_event_timeout_ms": 0, 00:44:08.650 "dhchap_digests": [ 00:44:08.650 "sha256", 00:44:08.650 "sha384", 00:44:08.650 "sha512" 00:44:08.650 ], 00:44:08.650 "dhchap_dhgroups": [ 00:44:08.650 "null", 00:44:08.650 "ffdhe2048", 00:44:08.650 "ffdhe3072", 00:44:08.650 "ffdhe4096", 00:44:08.650 "ffdhe6144", 00:44:08.650 "ffdhe8192" 00:44:08.650 ], 00:44:08.650 "rdma_umr_per_io": false 00:44:08.650 } 00:44:08.650 }, 00:44:08.650 { 00:44:08.650 "method": "bdev_nvme_attach_controller", 00:44:08.650 "params": { 00:44:08.650 "name": "nvme0", 00:44:08.650 "trtype": "TCP", 00:44:08.650 "adrfam": "IPv4", 00:44:08.650 "traddr": "127.0.0.1", 00:44:08.650 "trsvcid": "4420", 00:44:08.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:08.650 "prchk_reftag": false, 00:44:08.650 "prchk_guard": false, 00:44:08.650 "ctrlr_loss_timeout_sec": 0, 00:44:08.650 "reconnect_delay_sec": 0, 00:44:08.650 "fast_io_fail_timeout_sec": 0, 00:44:08.650 "psk": "key0", 00:44:08.650 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:08.650 "hdgst": false, 00:44:08.650 "ddgst": false, 00:44:08.650 "multipath": "multipath" 00:44:08.650 } 00:44:08.650 }, 00:44:08.650 { 00:44:08.650 "method": "bdev_nvme_set_hotplug", 00:44:08.650 "params": { 00:44:08.650 "period_us": 100000, 00:44:08.650 "enable": false 00:44:08.650 } 00:44:08.650 }, 00:44:08.650 { 00:44:08.650 "method": "bdev_wait_for_examine" 00:44:08.650 } 00:44:08.650 ] 00:44:08.650 }, 00:44:08.650 { 00:44:08.650 
"subsystem": "nbd", 00:44:08.650 "config": [] 00:44:08.650 } 00:44:08.650 ] 00:44:08.650 }' 00:44:08.650 16:57:38 keyring_file -- keyring/file.sh@115 -- # killprocess 1326922 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1326922 ']' 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1326922 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1326922 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1326922' 00:44:08.650 killing process with pid 1326922 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@973 -- # kill 1326922 00:44:08.650 Received shutdown signal, test time was about 1.000000 seconds 00:44:08.650 00:44:08.650 Latency(us) 00:44:08.650 [2024-12-14T15:57:38.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:08.650 [2024-12-14T15:57:38.736Z] =================================================================================================================== 00:44:08.650 [2024-12-14T15:57:38.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:08.650 16:57:38 keyring_file -- common/autotest_common.sh@978 -- # wait 1326922 00:44:08.926 16:57:38 keyring_file -- keyring/file.sh@118 -- # bperfpid=1328400 00:44:08.926 16:57:38 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1328400 /var/tmp/bperf.sock 00:44:08.926 16:57:38 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1328400 ']' 00:44:08.926 16:57:38 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:44:08.926 16:57:38 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:08.926 16:57:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:08.926 16:57:38 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:08.926 "subsystems": [ 00:44:08.926 { 00:44:08.926 "subsystem": "keyring", 00:44:08.926 "config": [ 00:44:08.926 { 00:44:08.926 "method": "keyring_file_add_key", 00:44:08.926 "params": { 00:44:08.926 "name": "key0", 00:44:08.926 "path": "/tmp/tmp.dg6TC5XTS7" 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": "keyring_file_add_key", 00:44:08.926 "params": { 00:44:08.926 "name": "key1", 00:44:08.926 "path": "/tmp/tmp.e2EmUHFaqK" 00:44:08.926 } 00:44:08.926 } 00:44:08.926 ] 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "subsystem": "iobuf", 00:44:08.926 "config": [ 00:44:08.926 { 00:44:08.926 "method": "iobuf_set_options", 00:44:08.926 "params": { 00:44:08.926 "small_pool_count": 8192, 00:44:08.926 "large_pool_count": 1024, 00:44:08.926 "small_bufsize": 8192, 00:44:08.926 "large_bufsize": 135168, 00:44:08.926 "enable_numa": false 00:44:08.926 } 00:44:08.926 } 00:44:08.926 ] 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "subsystem": "sock", 00:44:08.926 "config": [ 00:44:08.926 { 00:44:08.926 "method": "sock_set_default_impl", 00:44:08.926 "params": { 00:44:08.926 "impl_name": "posix" 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": "sock_impl_set_options", 00:44:08.926 "params": { 00:44:08.926 "impl_name": "ssl", 00:44:08.926 "recv_buf_size": 4096, 00:44:08.926 "send_buf_size": 4096, 00:44:08.926 "enable_recv_pipe": true, 00:44:08.926 "enable_quickack": false, 00:44:08.926 "enable_placement_id": 0, 00:44:08.926 "enable_zerocopy_send_server": true, 00:44:08.926 "enable_zerocopy_send_client": false, 00:44:08.926 
"zerocopy_threshold": 0, 00:44:08.926 "tls_version": 0, 00:44:08.926 "enable_ktls": false 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": "sock_impl_set_options", 00:44:08.926 "params": { 00:44:08.926 "impl_name": "posix", 00:44:08.926 "recv_buf_size": 2097152, 00:44:08.926 "send_buf_size": 2097152, 00:44:08.926 "enable_recv_pipe": true, 00:44:08.926 "enable_quickack": false, 00:44:08.926 "enable_placement_id": 0, 00:44:08.926 "enable_zerocopy_send_server": true, 00:44:08.926 "enable_zerocopy_send_client": false, 00:44:08.926 "zerocopy_threshold": 0, 00:44:08.926 "tls_version": 0, 00:44:08.926 "enable_ktls": false 00:44:08.926 } 00:44:08.926 } 00:44:08.926 ] 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "subsystem": "vmd", 00:44:08.926 "config": [] 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "subsystem": "accel", 00:44:08.926 "config": [ 00:44:08.926 { 00:44:08.926 "method": "accel_set_options", 00:44:08.926 "params": { 00:44:08.926 "small_cache_size": 128, 00:44:08.926 "large_cache_size": 16, 00:44:08.926 "task_count": 2048, 00:44:08.926 "sequence_count": 2048, 00:44:08.926 "buf_count": 2048 00:44:08.926 } 00:44:08.926 } 00:44:08.926 ] 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "subsystem": "bdev", 00:44:08.926 "config": [ 00:44:08.926 { 00:44:08.926 "method": "bdev_set_options", 00:44:08.926 "params": { 00:44:08.926 "bdev_io_pool_size": 65535, 00:44:08.926 "bdev_io_cache_size": 256, 00:44:08.926 "bdev_auto_examine": true, 00:44:08.926 "iobuf_small_cache_size": 128, 00:44:08.926 "iobuf_large_cache_size": 16 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": "bdev_raid_set_options", 00:44:08.926 "params": { 00:44:08.926 "process_window_size_kb": 1024, 00:44:08.926 "process_max_bandwidth_mb_sec": 0 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": "bdev_iscsi_set_options", 00:44:08.926 "params": { 00:44:08.926 "timeout_sec": 30 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": 
"bdev_nvme_set_options", 00:44:08.926 "params": { 00:44:08.926 "action_on_timeout": "none", 00:44:08.926 "timeout_us": 0, 00:44:08.926 "timeout_admin_us": 0, 00:44:08.926 "keep_alive_timeout_ms": 10000, 00:44:08.926 "arbitration_burst": 0, 00:44:08.926 "low_priority_weight": 0, 00:44:08.926 "medium_priority_weight": 0, 00:44:08.926 "high_priority_weight": 0, 00:44:08.926 "nvme_adminq_poll_period_us": 10000, 00:44:08.926 "nvme_ioq_poll_period_us": 0, 00:44:08.926 "io_queue_requests": 512, 00:44:08.926 "delay_cmd_submit": true, 00:44:08.926 "transport_retry_count": 4, 00:44:08.926 "bdev_retry_count": 3, 00:44:08.926 "transport_ack_timeout": 0, 00:44:08.926 "ctrlr_loss_timeout_sec": 0, 00:44:08.926 "reconnect_delay_sec": 0, 00:44:08.926 "fast_io_fail_timeout_sec": 0, 00:44:08.926 "disable_auto_failback": false, 00:44:08.926 "generate_uuids": false, 00:44:08.926 "transport_tos": 0, 00:44:08.926 "nvme_error_stat": false, 00:44:08.926 "rdma_srq_size": 0, 00:44:08.926 16:57:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:44:08.926 "io_path_stat": false, 00:44:08.926 "allow_accel_sequence": false, 00:44:08.926 "rdma_max_cq_size": 0, 00:44:08.926 "rdma_cm_event_timeout_ms": 0, 00:44:08.926 "dhchap_digests": [ 00:44:08.926 "sha256", 00:44:08.926 "sha384", 00:44:08.926 "sha512" 00:44:08.926 ], 00:44:08.926 "dhchap_dhgroups": [ 00:44:08.926 "null", 00:44:08.926 "ffdhe2048", 00:44:08.926 "ffdhe3072", 00:44:08.926 "ffdhe4096", 00:44:08.926 "ffdhe6144", 00:44:08.926 "ffdhe8192" 00:44:08.926 ], 00:44:08.926 "rdma_umr_per_io": false 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": "bdev_nvme_attach_controller", 00:44:08.926 "params": { 00:44:08.926 "name": "nvme0", 00:44:08.926 "trtype": "TCP", 00:44:08.926 "adrfam": "IPv4", 00:44:08.926 "traddr": "127.0.0.1", 00:44:08.926 "trsvcid": "4420", 00:44:08.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:08.926 "prchk_reftag": false, 00:44:08.926 "prchk_guard": false, 00:44:08.926 "ctrlr_loss_timeout_sec": 0, 00:44:08.926 "reconnect_delay_sec": 0, 00:44:08.926 "fast_io_fail_timeout_sec": 0, 00:44:08.926 "psk": "key0", 00:44:08.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:08.926 "hdgst": false, 00:44:08.926 "ddgst": false, 00:44:08.926 "multipath": "multipath" 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": "bdev_nvme_set_hotplug", 00:44:08.926 "params": { 00:44:08.926 "period_us": 100000, 00:44:08.926 "enable": false 00:44:08.926 } 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "method": "bdev_wait_for_examine" 00:44:08.926 } 00:44:08.926 ] 00:44:08.926 }, 00:44:08.926 { 00:44:08.926 "subsystem": "nbd", 00:44:08.926 "config": [] 00:44:08.926 } 00:44:08.926 ] 00:44:08.926 }' 00:44:08.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
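The bdevperf invocation above reads its whole JSON subsystem config through `-c /dev/fd/63`: the test script `echo`s the config into a bash process substitution, so the tool sees an ordinary readable path and no temp file ever hits disk. A minimal runnable sketch of that delivery pattern, with `cat` standing in for bdevperf and a trimmed, hypothetical config:

```shell
#!/usr/bin/env bash
# Sketch of the config-over-fd pattern seen in the log: the JSON subsystem
# config is echoed into a process substitution, and the consumer reads it
# back as /dev/fd/NN via its -c flag. `cat` stands in for bdevperf here so
# the sketch runs anywhere; the config below is a trimmed, made-up example.
config='{
  "subsystems": [
    { "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.example" } }
      ] }
  ] }'

# <(...) creates an anonymous pipe exposed as a /dev/fd path; the consumer
# never needs a real file on disk.
cat <(printf '%s\n' "$config")
```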
00:44:08.926 16:57:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:08.926 16:57:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:08.926 [2024-12-14 16:57:38.899108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:08.926 [2024-12-14 16:57:38.899157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328400 ] 00:44:08.926 [2024-12-14 16:57:38.971339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.926 [2024-12-14 16:57:38.993663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:09.185 [2024-12-14 16:57:39.149056] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:09.753 16:57:39 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:09.753 16:57:39 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:09.753 16:57:39 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:09.753 16:57:39 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:09.753 16:57:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.012 16:57:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:10.012 16:57:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:10.012 16:57:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:10.012 16:57:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:10.012 16:57:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.012 16:57:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:10.012 16:57:39 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.270 16:57:40 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:10.270 16:57:40 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:10.270 16:57:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:10.270 16:57:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:10.270 16:57:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.270 16:57:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:10.270 16:57:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.270 16:57:40 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:10.270 16:57:40 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:10.270 16:57:40 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:10.270 16:57:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:10.529 16:57:40 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:10.529 16:57:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:10.529 16:57:40 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dg6TC5XTS7 /tmp/tmp.e2EmUHFaqK 00:44:10.529 16:57:40 keyring_file -- keyring/file.sh@20 -- # killprocess 1328400 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1328400 ']' 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1328400 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1328400 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1328400' 00:44:10.529 killing process with pid 1328400 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@973 -- # kill 1328400 00:44:10.529 Received shutdown signal, test time was about 1.000000 seconds 00:44:10.529 00:44:10.529 Latency(us) 00:44:10.529 [2024-12-14T15:57:40.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:10.529 [2024-12-14T15:57:40.615Z] =================================================================================================================== 00:44:10.529 [2024-12-14T15:57:40.615Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:10.529 16:57:40 keyring_file -- common/autotest_common.sh@978 -- # wait 1328400 00:44:10.788 16:57:40 keyring_file -- keyring/file.sh@21 -- # killprocess 1326909 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1326909 ']' 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1326909 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1326909 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1326909' 00:44:10.788 killing process with pid 1326909 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@973 -- # kill 
1326909 00:44:10.788 16:57:40 keyring_file -- common/autotest_common.sh@978 -- # wait 1326909 00:44:11.047 00:44:11.047 real 0m11.690s 00:44:11.047 user 0m29.202s 00:44:11.047 sys 0m2.647s 00:44:11.047 16:57:41 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:11.047 16:57:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:11.047 ************************************ 00:44:11.047 END TEST keyring_file 00:44:11.047 ************************************ 00:44:11.306 16:57:41 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:11.306 16:57:41 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:11.306 16:57:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:11.306 16:57:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:11.306 16:57:41 -- common/autotest_common.sh@10 -- # set +x 00:44:11.306 ************************************ 00:44:11.306 START TEST keyring_linux 00:44:11.306 ************************************ 00:44:11.306 16:57:41 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:11.306 Joined session keyring: 527852287 00:44:11.306 * Looking for test storage... 
00:44:11.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:11.306 16:57:41 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:11.306 16:57:41 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:11.306 16:57:41 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:11.306 16:57:41 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:11.306 16:57:41 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:11.307 16:57:41 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:11.307 16:57:41 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.307 --rc genhtml_branch_coverage=1 00:44:11.307 --rc genhtml_function_coverage=1 00:44:11.307 --rc genhtml_legend=1 00:44:11.307 --rc geninfo_all_blocks=1 00:44:11.307 --rc geninfo_unexecuted_blocks=1 00:44:11.307 00:44:11.307 ' 00:44:11.307 16:57:41 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.307 --rc genhtml_branch_coverage=1 00:44:11.307 --rc genhtml_function_coverage=1 00:44:11.307 --rc genhtml_legend=1 00:44:11.307 --rc geninfo_all_blocks=1 00:44:11.307 --rc geninfo_unexecuted_blocks=1 00:44:11.307 00:44:11.307 ' 
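The `cmp_versions` trace above (scripts/common.sh) splits two dotted version strings into arrays and compares them field by field to decide whether the installed lcov 1.15 predates 2.x, which then selects the `--rc lcov_*` option spelling. A simplified runnable sketch of that comparison — `ver_lt` is our own helper name, not the SPDK function, and it assumes purely numeric dot-separated fields:

```shell
#!/usr/bin/env bash
# Simplified sketch of the field-by-field version comparison traced above.
# ver_lt A B returns success (0) iff version A is strictly lower than B.
ver_lt() {
  local -a v1 v2
  IFS=. read -ra v1 <<< "$1"
  IFS=. read -ra v2 <<< "$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing trailing fields count as 0, so 1.15 compares like 1.15.0.
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 < 2"   # prints: lcov 1.15 < 2
```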
00:44:11.307 16:57:41 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.307 --rc genhtml_branch_coverage=1 00:44:11.307 --rc genhtml_function_coverage=1 00:44:11.307 --rc genhtml_legend=1 00:44:11.307 --rc geninfo_all_blocks=1 00:44:11.307 --rc geninfo_unexecuted_blocks=1 00:44:11.307 00:44:11.307 ' 00:44:11.307 16:57:41 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.307 --rc genhtml_branch_coverage=1 00:44:11.307 --rc genhtml_function_coverage=1 00:44:11.307 --rc genhtml_legend=1 00:44:11.307 --rc geninfo_all_blocks=1 00:44:11.307 --rc geninfo_unexecuted_blocks=1 00:44:11.307 00:44:11.307 ' 00:44:11.307 16:57:41 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:11.307 16:57:41 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:11.307 16:57:41 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:11.307 16:57:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.307 16:57:41 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.307 16:57:41 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.307 16:57:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:11.307 16:57:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:11.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:11.307 16:57:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:11.307 16:57:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:11.307 16:57:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:11.307 16:57:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:11.307 16:57:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:11.307 16:57:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:11.307 16:57:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:11.307 16:57:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:11.307 16:57:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:11.307 16:57:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:11.307 16:57:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:11.307 16:57:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:11.307 16:57:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:11.307 16:57:41 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:11.566 /tmp/:spdk-test:key0 00:44:11.566 16:57:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:11.566 16:57:41 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:11.566 16:57:41 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:11.566 16:57:41 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:11.566 16:57:41 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:11.566 16:57:41 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:11.566 16:57:41 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:11.566 16:57:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:11.566 /tmp/:spdk-test:key1 00:44:11.566 16:57:41 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1328943 00:44:11.566 16:57:41 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1328943 00:44:11.566 16:57:41 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:11.566 16:57:41 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1328943 ']' 00:44:11.566 16:57:41 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:11.566 16:57:41 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:11.566 16:57:41 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:11.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:11.566 16:57:41 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:11.566 16:57:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:11.566 [2024-12-14 16:57:41.510670] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:11.566 [2024-12-14 16:57:41.510723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328943 ] 00:44:11.566 [2024-12-14 16:57:41.587232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:11.566 [2024-12-14 16:57:41.609283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:11.825 16:57:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:11.825 [2024-12-14 16:57:41.825569] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:11.825 null0 00:44:11.825 [2024-12-14 16:57:41.857621] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:11.825 [2024-12-14 16:57:41.857933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.825 16:57:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:11.825 217242555 00:44:11.825 16:57:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:11.825 732749813 00:44:11.825 16:57:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1328951 00:44:11.825 16:57:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1328951 /var/tmp/bperf.sock 00:44:11.825 16:57:41 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1328951 ']' 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:11.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:11.825 16:57:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:12.085 [2024-12-14 16:57:41.928506] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:12.085 [2024-12-14 16:57:41.928549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328951 ] 00:44:12.085 [2024-12-14 16:57:42.002505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:12.085 [2024-12-14 16:57:42.024903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:12.085 16:57:42 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:12.085 16:57:42 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:12.085 16:57:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:12.085 16:57:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:12.344 16:57:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:12.344 16:57:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:12.603 16:57:42 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:12.603 16:57:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:12.603 [2024-12-14 16:57:42.664011] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:12.862 nvme0n1 00:44:12.862 16:57:42 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:12.862 16:57:42 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:12.862 16:57:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:12.862 16:57:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:12.862 16:57:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:12.862 16:57:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:13.121 16:57:42 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:13.121 16:57:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:13.121 16:57:42 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:13.121 16:57:42 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:13.121 16:57:42 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:13.121 16:57:42 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:13.121 16:57:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:13.121 16:57:43 keyring_linux -- keyring/linux.sh@25 -- # sn=217242555 00:44:13.121 16:57:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:13.121 16:57:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:13.121 16:57:43 keyring_linux -- keyring/linux.sh@26 -- # [[ 217242555 == \2\1\7\2\4\2\5\5\5 ]] 00:44:13.121 16:57:43 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 217242555 00:44:13.121 16:57:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:13.121 16:57:43 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:13.380 Running I/O for 1 seconds... 00:44:14.314 21187.00 IOPS, 82.76 MiB/s 00:44:14.315 Latency(us) 00:44:14.315 [2024-12-14T15:57:44.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:14.315 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:14.315 nvme0n1 : 1.01 21188.32 82.77 0.00 0.00 6020.98 4899.60 12732.71 00:44:14.315 [2024-12-14T15:57:44.401Z] =================================================================================================================== 00:44:14.315 [2024-12-14T15:57:44.401Z] Total : 21188.32 82.77 0.00 0.00 6020.98 4899.60 12732.71 00:44:14.315 { 00:44:14.315 "results": [ 00:44:14.315 { 00:44:14.315 "job": "nvme0n1", 00:44:14.315 "core_mask": "0x2", 00:44:14.315 "workload": "randread", 00:44:14.315 "status": "finished", 00:44:14.315 "queue_depth": 128, 00:44:14.315 "io_size": 4096, 00:44:14.315 "runtime": 1.005979, 00:44:14.315 "iops": 21188.315064230963, 00:44:14.315 "mibps": 82.7668557196522, 00:44:14.315 "io_failed": 0, 00:44:14.315 "io_timeout": 0, 00:44:14.315 "avg_latency_us": 6020.978987388716, 00:44:14.315 "min_latency_us": 4899.596190476191, 00:44:14.315 "max_latency_us": 12732.708571428571 00:44:14.315 } 00:44:14.315 ], 00:44:14.315 "core_count": 1 00:44:14.315 } 00:44:14.315 16:57:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:14.315 16:57:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:14.573 16:57:44 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:14.573 16:57:44 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:14.573 16:57:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:14.573 16:57:44 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:14.573 16:57:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:14.573 16:57:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:14.832 16:57:44 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:14.832 16:57:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:14.832 16:57:44 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:14.832 16:57:44 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:14.832 16:57:44 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:14.832 16:57:44 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:14.832 16:57:44 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:14.832 16:57:44 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:14.832 16:57:44 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:14.832 16:57:44 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:14.832 16:57:44 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:14.832 16:57:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:14.832 [2024-12-14 16:57:44.876227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:14.832 [2024-12-14 16:57:44.876852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf403d0 (107): Transport endpoint is not connected 00:44:14.832 [2024-12-14 16:57:44.877847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf403d0 (9): Bad file descriptor 00:44:14.832 [2024-12-14 16:57:44.878848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:14.832 [2024-12-14 16:57:44.878859] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:14.832 [2024-12-14 16:57:44.878866] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:14.832 [2024-12-14 16:57:44.878875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:14.832 request: 00:44:14.832 { 00:44:14.832 "name": "nvme0", 00:44:14.832 "trtype": "tcp", 00:44:14.832 "traddr": "127.0.0.1", 00:44:14.832 "adrfam": "ipv4", 00:44:14.832 "trsvcid": "4420", 00:44:14.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:14.832 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:14.832 "prchk_reftag": false, 00:44:14.832 "prchk_guard": false, 00:44:14.832 "hdgst": false, 00:44:14.832 "ddgst": false, 00:44:14.832 "psk": ":spdk-test:key1", 00:44:14.832 "allow_unrecognized_csi": false, 00:44:14.832 "method": "bdev_nvme_attach_controller", 00:44:14.832 "req_id": 1 00:44:14.832 } 00:44:14.833 Got JSON-RPC error response 00:44:14.833 response: 00:44:14.833 { 00:44:14.833 "code": -5, 00:44:14.833 "message": "Input/output error" 00:44:14.833 } 00:44:14.833 16:57:44 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:14.833 16:57:44 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:14.833 16:57:44 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:14.833 16:57:44 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@33 -- # sn=217242555 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 217242555 00:44:14.833 1 links removed 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:14.833 
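The `NOT bperf_cmd ...` / `es=1` bookkeeping above is SPDK's expect-failure idiom: the attach with the wrong PSK must fail for the test to pass. A simplified standalone sketch of that idiom (this is not SPDK's actual autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# NOT succeeds only when the wrapped command fails, so a test step can
# assert that a command errors out without tripping the script's own
# error handling.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

NOT false && echo "command was expected to fail, and it did"
```

In the log the captured exit status is also range-checked (`(( es > 128 ))`) to distinguish a plain failure from a signal death; the sketch keeps only the core check.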
16:57:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@33 -- # sn=732749813 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 732749813 00:44:14.833 1 links removed 00:44:14.833 16:57:44 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1328951 00:44:14.833 16:57:44 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1328951 ']' 00:44:14.833 16:57:44 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1328951 00:44:15.092 16:57:44 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:15.092 16:57:44 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:15.092 16:57:44 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1328951 00:44:15.092 16:57:44 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:15.092 16:57:44 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:15.092 16:57:44 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1328951' 00:44:15.092 killing process with pid 1328951 00:44:15.092 16:57:44 keyring_linux -- common/autotest_common.sh@973 -- # kill 1328951 00:44:15.092 Received shutdown signal, test time was about 1.000000 seconds 00:44:15.092 00:44:15.092 Latency(us) 00:44:15.092 [2024-12-14T15:57:45.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.092 [2024-12-14T15:57:45.178Z] =================================================================================================================== 00:44:15.092 [2024-12-14T15:57:45.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:15.092 16:57:44 keyring_linux -- common/autotest_common.sh@978 -- # wait 1328951 
00:44:15.092 16:57:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1328943 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1328943 ']' 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1328943 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1328943 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1328943' 00:44:15.092 killing process with pid 1328943 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@973 -- # kill 1328943 00:44:15.092 16:57:45 keyring_linux -- common/autotest_common.sh@978 -- # wait 1328943 00:44:15.659 00:44:15.659 real 0m4.289s 00:44:15.659 user 0m8.104s 00:44:15.659 sys 0m1.450s 00:44:15.659 16:57:45 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:15.659 16:57:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:15.659 ************************************ 00:44:15.659 END TEST keyring_linux 00:44:15.659 ************************************ 00:44:15.659 16:57:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@346 -- # 
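The check_keys/unlink_key helpers above drive the kernel session keyring through keyctl. A minimal round-trip sketch of the same add/search/print/unlink sequence (key name and payload are illustrative; it needs keyutils and a usable session keyring, and skips itself otherwise):

```shell
#!/usr/bin/env bash
set -e
command -v keyctl >/dev/null 2>&1 || { echo "keyutils not installed; skipping"; exit 0; }

name=":gr-demo:key0"                               # illustrative key name
sn=$(keyctl add user "$name" "demo-payload" @s 2>/dev/null) \
    || { echo "no usable session keyring; skipping"; exit 0; }

keyctl search @s user "$name"   # resolves the name back to the same serial
keyctl print "$sn"              # dumps the payload, as linux.sh@27 does
keyctl unlink "$sn" @s          # cleanup, as unlink_key does
```

This mirrors why the test compares the serial from the RPC layer against `keyctl search`: both must resolve the same kernel key.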
'[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:15.659 16:57:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:15.659 16:57:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:15.659 16:57:45 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:15.659 16:57:45 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:15.659 16:57:45 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:15.659 16:57:45 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:15.659 16:57:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:15.659 16:57:45 -- common/autotest_common.sh@10 -- # set +x 00:44:15.659 16:57:45 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:15.659 16:57:45 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:15.659 16:57:45 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:15.659 16:57:45 -- common/autotest_common.sh@10 -- # set +x 00:44:20.935 INFO: APP EXITING 00:44:20.936 INFO: killing all VMs 00:44:20.936 INFO: killing vhost app 00:44:20.936 INFO: EXIT DONE 00:44:23.473 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:23.473 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:23.473 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:23.473 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:23.733 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:23.733 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:23.992 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:27.284 Cleaning 00:44:27.284 Removing: /var/run/dpdk/spdk0/config 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:27.284 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:27.284 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:27.284 Removing: /var/run/dpdk/spdk1/config 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:27.284 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:27.284 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:27.284 Removing: /var/run/dpdk/spdk2/config 00:44:27.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:27.284 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:27.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:27.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:27.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:27.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:27.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:27.284 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:27.284 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:27.284 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:27.284 Removing: /var/run/dpdk/spdk3/config 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:27.284 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:27.284 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:27.284 Removing: /var/run/dpdk/spdk4/config 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:27.284 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:27.284 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:44:27.284 Removing: /dev/shm/bdev_svc_trace.1 00:44:27.284 Removing: /dev/shm/nvmf_trace.0 00:44:27.284 Removing: /dev/shm/spdk_tgt_trace.pid772974 00:44:27.284 Removing: /var/run/dpdk/spdk0 00:44:27.284 Removing: /var/run/dpdk/spdk1 00:44:27.284 Removing: /var/run/dpdk/spdk2 00:44:27.284 Removing: /var/run/dpdk/spdk3 00:44:27.284 Removing: /var/run/dpdk/spdk4 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1011065 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1015444 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1017222 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1019371 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1019524 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1019753 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1019768 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1020257 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1022041 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1022787 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1023280 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1025319 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1025803 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1026495 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1030473 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1035753 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1035754 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1035755 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1039555 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1043363 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1048233 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1083476 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1087456 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1093309 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1094446 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1095863 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1097541 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1102146 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1106381 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1110167 00:44:27.284 Removing: 
/var/run/dpdk/spdk_pid1117608 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1117610 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1122030 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1122251 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1122471 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1122918 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1122927 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1124277 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1125843 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1127503 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1129173 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1130727 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1132380 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1138240 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1138796 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1140627 00:44:27.284 Removing: /var/run/dpdk/spdk_pid1142042 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1147633 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1150304 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1155588 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1160618 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1169183 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1176263 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1176266 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1194988 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1195447 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1196113 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1196579 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1197294 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1197754 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1198255 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1198886 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1202864 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1203146 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1209033 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1209297 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1214444 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1218591 
00:44:27.285 Removing: /var/run/dpdk/spdk_pid1228189 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1228895 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1233456 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1233700 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1237706 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1243307 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1245838 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1255631 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1264154 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1265718 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1266613 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1282945 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1286679 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1289316 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1296886 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1296991 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1301995 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1303816 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1305679 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1306888 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1308815 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1309925 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1318420 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1318866 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1319375 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1322189 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1322707 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1323160 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1326909 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1326922 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1328400 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1328943 00:44:27.285 Removing: /var/run/dpdk/spdk_pid1328951 00:44:27.285 Removing: /var/run/dpdk/spdk_pid770578 00:44:27.285 Removing: /var/run/dpdk/spdk_pid771656 00:44:27.285 Removing: /var/run/dpdk/spdk_pid772974 00:44:27.285 Removing: /var/run/dpdk/spdk_pid773820 00:44:27.285 Removing: 
/var/run/dpdk/spdk_pid774742 00:44:27.285 Removing: /var/run/dpdk/spdk_pid774852 00:44:27.285 Removing: /var/run/dpdk/spdk_pid775875 00:44:27.285 Removing: /var/run/dpdk/spdk_pid775932 00:44:27.285 Removing: /var/run/dpdk/spdk_pid776278 00:44:27.285 Removing: /var/run/dpdk/spdk_pid777754 00:44:27.285 Removing: /var/run/dpdk/spdk_pid779006 00:44:27.285 Removing: /var/run/dpdk/spdk_pid779421 00:44:27.285 Removing: /var/run/dpdk/spdk_pid779584 00:44:27.285 Removing: /var/run/dpdk/spdk_pid779873 00:44:27.285 Removing: /var/run/dpdk/spdk_pid780157 00:44:27.285 Removing: /var/run/dpdk/spdk_pid780403 00:44:27.285 Removing: /var/run/dpdk/spdk_pid780649 00:44:27.285 Removing: /var/run/dpdk/spdk_pid780931 00:44:27.285 Removing: /var/run/dpdk/spdk_pid781652 00:44:27.285 Removing: /var/run/dpdk/spdk_pid784577 00:44:27.545 Removing: /var/run/dpdk/spdk_pid784846 00:44:27.545 Removing: /var/run/dpdk/spdk_pid785108 00:44:27.545 Removing: /var/run/dpdk/spdk_pid785114 00:44:27.545 Removing: /var/run/dpdk/spdk_pid785590 00:44:27.545 Removing: /var/run/dpdk/spdk_pid785603 00:44:27.545 Removing: /var/run/dpdk/spdk_pid786081 00:44:27.545 Removing: /var/run/dpdk/spdk_pid786086 00:44:27.545 Removing: /var/run/dpdk/spdk_pid786475 00:44:27.545 Removing: /var/run/dpdk/spdk_pid786561 00:44:27.545 Removing: /var/run/dpdk/spdk_pid786808 00:44:27.545 Removing: /var/run/dpdk/spdk_pid786823 00:44:27.545 Removing: /var/run/dpdk/spdk_pid787370 00:44:27.545 Removing: /var/run/dpdk/spdk_pid787556 00:44:27.545 Removing: /var/run/dpdk/spdk_pid787888 00:44:27.545 Removing: /var/run/dpdk/spdk_pid791551 00:44:27.545 Removing: /var/run/dpdk/spdk_pid795750 00:44:27.545 Removing: /var/run/dpdk/spdk_pid805823 00:44:27.545 Removing: /var/run/dpdk/spdk_pid806458 00:44:27.545 Removing: /var/run/dpdk/spdk_pid810699 00:44:27.545 Removing: /var/run/dpdk/spdk_pid810939 00:44:27.545 Removing: /var/run/dpdk/spdk_pid815250 00:44:27.545 Removing: /var/run/dpdk/spdk_pid821401 00:44:27.545 Removing: 
/var/run/dpdk/spdk_pid824060 00:44:27.545 Removing: /var/run/dpdk/spdk_pid834170 00:44:27.545 Removing: /var/run/dpdk/spdk_pid842934 00:44:27.545 Removing: /var/run/dpdk/spdk_pid844716 00:44:27.545 Removing: /var/run/dpdk/spdk_pid845620 00:44:27.545 Removing: /var/run/dpdk/spdk_pid862285 00:44:27.545 Removing: /var/run/dpdk/spdk_pid866646 00:44:27.545 Removing: /var/run/dpdk/spdk_pid948511 00:44:27.545 Removing: /var/run/dpdk/spdk_pid953631 00:44:27.545 Removing: /var/run/dpdk/spdk_pid959482 00:44:27.545 Removing: /var/run/dpdk/spdk_pid965617 00:44:27.545 Removing: /var/run/dpdk/spdk_pid965678 00:44:27.545 Removing: /var/run/dpdk/spdk_pid966512 00:44:27.545 Removing: /var/run/dpdk/spdk_pid967405 00:44:27.545 Removing: /var/run/dpdk/spdk_pid968294 00:44:27.545 Removing: /var/run/dpdk/spdk_pid968748 00:44:27.545 Removing: /var/run/dpdk/spdk_pid968853 00:44:27.545 Removing: /var/run/dpdk/spdk_pid969152 00:44:27.545 Removing: /var/run/dpdk/spdk_pid969204 00:44:27.545 Removing: /var/run/dpdk/spdk_pid969209 00:44:27.545 Removing: /var/run/dpdk/spdk_pid970094 00:44:27.545 Removing: /var/run/dpdk/spdk_pid970981 00:44:27.545 Removing: /var/run/dpdk/spdk_pid971871 00:44:27.545 Removing: /var/run/dpdk/spdk_pid972331 00:44:27.545 Removing: /var/run/dpdk/spdk_pid972334 00:44:27.545 Removing: /var/run/dpdk/spdk_pid972655 00:44:27.545 Removing: /var/run/dpdk/spdk_pid973758 00:44:27.545 Removing: /var/run/dpdk/spdk_pid974717 00:44:27.545 Removing: /var/run/dpdk/spdk_pid982912 00:44:27.545 Clean 00:44:27.804 16:57:57 -- common/autotest_common.sh@1453 -- # return 0 00:44:27.804 16:57:57 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:44:27.804 16:57:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:27.804 16:57:57 -- common/autotest_common.sh@10 -- # set +x 00:44:27.804 16:57:57 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:44:27.804 16:57:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:27.804 16:57:57 -- common/autotest_common.sh@10 -- # 
set +x 00:44:27.804 16:57:57 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:27.804 16:57:57 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:27.804 16:57:57 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:27.804 16:57:57 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:44:27.804 16:57:57 -- spdk/autotest.sh@398 -- # hostname 00:44:27.804 16:57:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:27.804 geninfo: WARNING: invalid characters removed from testname! 
00:44:49.774 16:58:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:51.153 16:58:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:53.058 16:58:22 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:54.965 16:58:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:56.870 16:58:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:58.249 16:58:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:00.156 16:58:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:00.156 16:58:30 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:00.156 16:58:30 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:00.156 16:58:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:00.156 16:58:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:00.156 16:58:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:00.156 + [[ -n 676634 ]] 00:45:00.156 + sudo kill 676634 00:45:00.166 [Pipeline] } 00:45:00.182 [Pipeline] // stage 00:45:00.187 [Pipeline] } 00:45:00.202 [Pipeline] // timeout 00:45:00.207 [Pipeline] } 00:45:00.222 [Pipeline] // catchError 00:45:00.227 [Pipeline] } 00:45:00.243 [Pipeline] // wrap 00:45:00.250 [Pipeline] } 00:45:00.263 [Pipeline] // catchError 00:45:00.273 [Pipeline] stage 00:45:00.275 [Pipeline] { (Epilogue) 00:45:00.289 [Pipeline] catchError 00:45:00.291 [Pipeline] { 00:45:00.305 [Pipeline] echo 00:45:00.307 Cleanup processes 
00:45:00.314 [Pipeline] sh 00:45:00.601 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:00.601 1340649 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:00.617 [Pipeline] sh 00:45:00.902 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:00.903 ++ grep -v 'sudo pgrep' 00:45:00.903 ++ awk '{print $1}' 00:45:00.903 + sudo kill -9 00:45:00.903 + true 00:45:00.915 [Pipeline] sh 00:45:01.202 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:13.428 [Pipeline] sh 00:45:13.710 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:13.710 Artifacts sizes are good 00:45:13.725 [Pipeline] archiveArtifacts 00:45:13.732 Archiving artifacts 00:45:13.885 [Pipeline] sh 00:45:14.169 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:14.183 [Pipeline] cleanWs 00:45:14.193 [WS-CLEANUP] Deleting project workspace... 00:45:14.193 [WS-CLEANUP] Deferred wipeout is used... 00:45:14.199 [WS-CLEANUP] done 00:45:14.202 [Pipeline] } 00:45:14.219 [Pipeline] // catchError 00:45:14.230 [Pipeline] sh 00:45:14.602 + logger -p user.info -t JENKINS-CI 00:45:14.630 [Pipeline] } 00:45:14.643 [Pipeline] // stage 00:45:14.648 [Pipeline] } 00:45:14.662 [Pipeline] // node 00:45:14.667 [Pipeline] End of Pipeline 00:45:14.718 Finished: SUCCESS
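The process cleanup in the prologue and epilogue above repeats one small pipeline. A standalone sketch of its filtering stage (the sample pgrep output is illustrative, modeled on the `675146 sudo pgrep ...` line in the log; the spdk_tgt line is invented):

```shell
#!/usr/bin/env bash
set -u

# Given "PID cmdline" lines from `sudo pgrep -af <workspace>`, drop the
# pgrep invocation itself and keep only the PID column. The job then runs
# `sudo kill -9 $pids` followed by `|| true`, so an empty match (nothing
# left to kill) is not treated as a failure.
filter_pids() {
    grep -v 'sudo pgrep' | awk '{print $1}'
}

filter_pids <<'EOF'
675146 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
675200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
EOF
```

Only `675200` survives the filter: the first line is the pgrep process itself and gets dropped before the kill.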